Microsoft CTO Kevin Scott on Bing’s quest to beat Google and the future of AI art
I co-hosted the Code Conference last week, and today’s episode is one of my favorite conversations from the show: Microsoft CTO and EVP of AI Kevin Scott. If you caught Kevin on Decoder a few months ago, you know that he and I love talking about technology together. I really appreciate that he thinks about the relationship between technology and culture as much as we do at The Verge, and it was great to add the energy from the live Code audience to that dynamic.
Kevin and I talked about how things are going with Bing and Microsoft’s AI efforts in general now that the initial hype has subsided — I really wanted to know if Bing was actually stealing users from Google.
Kevin also controls the entire GPU budget at Microsoft, and access to GPUs is a hot topic across the AI world right now — especially access to Nvidia’s H100 GPU, which is what so many of the best AI models run on. Microsoft itself runs on H100s, but Kevin is keenly aware of that dependency, and while he wouldn’t confirm any rumors about Microsoft developing its own AI chips right now, he did say a switch from Nvidia to AMD or other chip vendors should be seamless for Microsoft’s customers if the company ever does make that leap.
I also asked Kevin some pretty philosophical questions about AI: why would you write a song or a book when AI is out there making custom content for other people? Well, it’s because Kevin thinks the AI is still “terrible” at it for now, as he found out firsthand. But he also thinks that creating is just what people do, and AI will help more people become more creative. Like I said, this conversation got deep — I really like talking to Kevin.
Okay, Microsoft CTO Kevin Scott. Here we go.
This transcript has been lightly edited for length and clarity.
I could talk about literally anything with Kevin. He’s a maker. You’re a renaissance… We were talking about crimping Ethernet cables before we walked out onstage — literally anything. But we got to talk about AI. So I want to just ask from the beginning: Microsoft kicked off a huge moment in the AI rush with the announcement of Bing, the integration of OpenAI into the products. There are, obviously, the Copilots. How is that going? Has the integration of AI into Bing led to a market share gain, led to increased usage?
Yeah, for sure. It’s small market share gains, but definitely gains in ways that we hadn’t seen before. So super interesting learnings, and a lot of interesting things coming. Like we announced DALL-E 3 integration into Bing Chat and a bunch of other things just last week, so we continue to take all of the feedback and try to improve and iterate. That team moves pretty quickly, so it’s been a really interesting platform for us to do a bunch of experimentation. And a bunch of things that we’ve learned on Bing have been directly transferable to the other Copilot products that we’re building and even the API business that’s growing super fast right now.
The context of this question is, as we sit here on the West Coast having this conversation, on the East Coast, Google is in the middle of an antitrust trial about how it might’ve unfairly created a monopoly in search. And a huge theme in that trial is, “Well, hey, Microsoft exists. If they wanted to compete, they could. We’re just so good at this that they can’t.” Do you think Bing actually creates an edge in that race right now?
I think Bing is a very good search engine. It’s the search engine that I use. I will tell you, in all honesty, I’ve been at Microsoft for six and a half years. When I got there, I was still a Google Search user for a while. And it got to the point where the combo of the Edge browser and Bing search — just because the team is constantly grinding away, trying to improve quality — was more than good enough to be my daily driver browser-plus-search combo. And we’ve seen a growth in market share.
And I think the only thing that anybody can ask for is that you do high-quality product work, and you want marketplaces to be fair so you can compete. And I think it’s true for big companies, small companies, individuals who are trying to break through. Whatever it is, that notion of fairness is what everybody’s asking for. And it’s a complicated thing to go sort out. I will not comment on what’s going on on the East Coast right now.
Yeah, broadly. But I do think we have to be asking ourselves all the time about what’s fair and how can everyone participate. Because that’s the goal at the end of the day. We’re all creating these big platforms, whether it’s search as a platform, these cloud platforms… we’re building AI platforms right now. I think everybody is very reasonable in wanting to make sure that they can use these platforms in a fair way to do awesome work.
I think the conventional wisdom is that [in] an AI-powered search experience, you ask the computer a question, it just tells you a smart answer, or it goes out and talks to other AI systems that sort of collect an answer for you. That is the future. I think if you just broadly ask people, “What should search do?” “You ask a question, you get an answer.” That really changes the idea of how the web works. The fundamental incentive structure on the web is appearing in search results. Have you thought about that with Bing?
Yeah. So I think what you want from a search engine and what you’re going to want from an agent is a little more complicated than just asking a question and getting an answer. A whole bunch of the time, you’re trying to accomplish a task, and asking questions is part of the task, but sometimes, it’s just the beginning. Sometimes, it’s in the middle.
Like you’re planning a vacation, you’re doing research on how to ring out the Ethernet cables in a house you’re remodeling, whatever it is. That may involve purchasing some things or spending some time reading a pretty long thing because you can’t get the information that you need in just some small transaction that you’re having with an agent. I think it’s unclear the extent to which the dynamic will actually change. I think the particular thing is everybody is worried about referrals, and how is this going to… If the bot is giving you all the answers, what happens to referral traffic?
What’s the incentive to create new content? This is what I’m thinking about a lot.
If an AI search product can just summarize for you what I wrote in a review of the new phone, why would I ever be incentivized to create another review of a phone if no one’s ever going to visit me directly?
I don’t think that’s actually the thing that anybody wants. It’s certainly not the thing that I want, individually. There needs to be a healthy economic engine where people are all participating. They’re creating stuff, and they’re getting compensated for what they create.
Now, I think the compensation structure and how things work just evolves really rapidly. And it feels to me like, even independent of AI, things are changing very rapidly right now — like how people find an audience for the things that they’re creating, how people turn audience engagement into a real business model. On the one hand, it’s difficult because some of these funnels are hard to debug. You don’t really know what’s going on in an algorithm somewhere that’s directing traffic to your site.
So, I think that’s one of the opportunities that we can have right now in the conversation about how these AI agents are going to show up in the world. It’s not necessarily preserving exactly what that funnel looks like but being transparent about what the mechanics of it are so that if you’re going to spend a bunch of effort or try to use it as a way to acquire an audience, that you at least understand what’s going on, that it’s not arbitrary and capricious and, one day, something changes that no one told you about and you no longer know how to viably run your business.
The flip side of that is you also make a lot of tools that can create AI content. And you see these distribution platforms immediately being flooded with AI content. And when something like a search engine, or even the training of a new model, gets flooded with its own AI spam, that essentially leads to things like model collapse and a drastic reduction in quality. How do you filter that stuff out?
We’ve got an increasingly good set of ways, at least on the model training side, to make sure that you’re not ingesting low-quality content, and you’re sort of recursively getting—
Is there a difference between low-quality content and AI-generated content?
Sometimes, AI-generated content is good, and sometimes, it’s not. I think it’s sort of less interesting. It’s kind of a technical problem, whether or not you’re ingesting things into your training process that are causing the performance of a trained model to become worse over time. That’s a technical thing. I think it’s an entirely solvable problem.
I think the thing that you want in general is, as a consumer of content, you just don’t want to be reading a bunch of spammy AI-generated garbage. I don’t think anyone wants that. And I would even argue… This is an interesting thing you and I haven’t chatted about, but I think the purpose of making a piece of content isn’t this flimsy transactional thing that sometimes people think it is. It is trying to put something meaningful out into the world, to communicate something that you are feeling or that you think is important to say, and then trying to have some kind of connection with whoever’s consuming it.
So, there’s nothing about an AI being 100 percent of that interaction that seems interesting to me. I don’t know why I would want to be consuming a bunch of AI-generated content versus things that you are producing.
I think you are almost certainly going to want to use some of these AI tools to help produce content. One of the things that I did last fall when we were playing around with this stuff for the first time is: I was like, “Oh, I’ve wanted to write a science fiction book since I was a teenager, and I’ve never been able to just sort of get the activation energy.” And I started to attempt doing that with GPT-4, and it was terrible at it when used in the way that you would expect. So you can’t just go into the model and say, like, “Hey, here’s an outline for a science fiction book I’d like to write. Please write chapter one.”
That’s the model today. We’re in the context of the writers strike resolving. Even in that conversation, they were not worried about the model’s capabilities today. There will be a GPT-5 and a GPT-6, right?
Correct. And I actually agree with that. But the point that I was making is the useful thing about the tool is it helped keep me in flow state. So I’ve written a nonfiction book. I’ve never written a fiction book before. So the useful thing about it was not actually producing the content but, when I got stuck, helping me get unstuck, as if I had an ever-present writing partner or an editor who had infinite amounts of time to spend with me. It’s like, “Okay, I don’t know how to name this character. Let me describe what they’re about. Give me some fun names.”
So, it was really amazing the extent to which having an AI creative partner helped unblock me. But it was still… It was all me trying to figure out how the plot of this book ought to work. And I don’t think it would be particularly interesting to me as a reader to consume a novel’s worth of content that was 100 percent generated by an AI, with no human touch whatsoever. I don’t even know what that’s doing.
We’ve arrived now at the nature of art, so I’m going to make a hard shift to GPUs. This is what I mean about Kevin — we can go everywhere with Kevin. I just want to make sure we hit it all.
Why do people make art? The AI moment has provided us the opportunity to ask that question in a serious way. Because the internet has basically been like, “To make money.” And I think there’s a divergence there, as our distribution channels get flooded. I just don’t know that we’ll hit the answer in the next 10 minutes.
So, the last time you and I spoke, you said something to me that I have been thinking about ever since. This man controls the entire GPU budget at Microsoft — every dollar that flows into GPUs, right here.
Well, it’s not just me. It’s… But I’m the one that resolves the hard conflicts.
Yeah, that’s control. That’s what I mean. Is that job getting easier or harder for you?
It’s easier now than when we talked last time. So we were in a moment where I think the demand… Because a bunch of AI technology had ripped onto the scene in a surprising way, and demand was far exceeding the supply of GPU capacity that the whole ecosystem could produce. That is resolving. It’s still tight, but it’s getting better every week, and we’ve got more good news ahead of us than bad on that front, which is great. It makes my job of adjudicating these very gnarly conflicts less terrible.
There was some reporting this week, which you actually mentioned before, in The Information, that Microsoft is heavily invested in smaller models that require less compute. Are you bringing down the cost of compute over time?
Well, I think we are. And the thing that I will say here, which we were chatting about backstage, is when you build one of these AI applications, you end up using a full portfolio of models. So, you definitely want to have access to the big models. But for a whole bunch of reasons, if you can offload some of the work that the AI application needs to do to smaller models, you probably are going to want to do it.
And some of the motivations could be cost. Some of it could be latency. Some of them could be that you want to run part of the application locally because you don’t want to transit sensitive information to the cloud. There’s just a whole bunch of reasons why you want the flexibility to architect things where you’ve got a portfolio of these models.
And the other thing, too, is the folks at OpenAI, with some help from folks at Microsoft, have been working furiously on optimizing the big models as well. So it’s not an either-or. You want both, and you want both to be getting cheaper and faster and more performant and higher quality over time.
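To make that concrete, here is a minimal sketch of what routing across a portfolio of models can look like. Everything in it is hypothetical: the model names, the prices, and the crude complexity heuristic are stand-ins for illustration, not anything Microsoft has described.

```python
# A hypothetical sketch of routing across a portfolio of models: cheap/small
# where possible, big/expensive where necessary. Names and numbers are made up.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # illustrative dollars, not real pricing

SMALL = Model("small-local-model", 0.0002)   # could even run on-device
LARGE = Model("large-hosted-model", 0.03)

def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for a real difficulty classifier."""
    long_prompt = len(prompt.split()) > 200
    hard_words = any(w in prompt.lower() for w in ("prove", "analyze", "plan"))
    return 0.5 * long_prompt + 0.5 * hard_words

def route(prompt: str, sensitive: bool = False) -> Model:
    if sensitive:
        return SMALL  # keep sensitive data off the wire, as Scott notes
    return LARGE if estimate_complexity(prompt) >= 0.5 else SMALL

print(route("Summarize this paragraph.").name)         # -> small-local-model
print(route("Analyze this contract for risks.").name)  # -> large-hosted-model
```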
Can you bring down the cost of compute?
I’m looking at Copilot in Office 365. It’s $30 a seat. That’s an insane price. I think some people are going to think it’s very valuable, but that’s not a massive market for an AI pricing scheme. Can you bring that down?
I think we can bring the underlying cost of the AI down substantially. One of the interesting things that OpenAI did this spring is they reduced the cost to developers for access to the GPT-3.5 API by a factor of 10. That was almost entirely passing along a whole bunch of performance optimizations. So, the chips are getting better, price performance-wise, generation over generation. And the software techniques that we’re using to optimize the models are bringing tons of performance gains without compromising quality. And then, you have these other techniques of how you compose your application of small and big models that help as well. So yeah, definitely, the cost goes down. And the price is just what value you’re creating for people. So the market sort of sets the price. And if the market tells us that the price for these things is too high, then the price goes down.
This is the first time anyone has ever priced these things, so I guess we’ll find out. Is that signal working for you?
Yeah, we’re getting really good signal about price right now. And I think the thing that you just said is important. It is very early days right now for the commercialization of generative AI. So you have a whole bunch of things that you’ve got to figure out in parallel. One of them is how do you price them, and what is the market actually for these things? And there’s no reason to overprice things. The thing that you want is everybody getting value from them, as many people as humanly possible. So we’ll figure that out, I think, over time.
When I think about compute — these big models, running tools for customers — obviously, the story there is Nvidia chips, right? It’s access to H100s. It’s building capacity there. They’ve got 80 percent of the overall market share. How much do they represent for you?
Yeah, they’re… If you look at our key AI workloads, they’re a substantial fraction of our compute.
What’s your relationship with Nvidia like? Is that a good working relationship?
They are one of our most important partners. And we work with them on a daily basis, on a whole bunch of stuff, and I think the relationship is very good.
I look at Amazon, Google — they’re kind of making their own chips. I talked to the CEO of AWS a few weeks ago on Decoder. He didn’t sound thrilled that he had this existential dependency on Nvidia. They want to move to their own systems. Are you thinking about custom chips? Are you thinking about diversifying that supply chain for yourself?
Going back to the previous conversation, if you want to make sure that you’re able to price things competitively, and you want to make sure that the costs of these products that you’re building are as low as possible, competition is certainly a very good thing. I know Lisa Su, from AMD, is here at the conference. We’re doing a bunch of interesting work with Lisa, and I think they’re making increasingly compelling GPU offerings that I think are going to become more and more important in the marketplace in the coming years. I think there’s been a bunch of leaks about first-party silicon that Microsoft is building. We’ve been building silicon for a really long time now. So—
Wait, are you confirming these leaks?
I’m not confirming anything. But I will say that we’ve got a pretty substantial silicon investment that we’ve had for years. And the thing that we will do is we’ll make sure that we’re making the best choices for how we build these systems, using whatever options we have available. And the best option that’s been available over the past handful of years has been Nvidia. They have been really—
Is that because of the processing power in the chip, or is it because of the CUDA platform? Because what I’ve heard from folks, what I heard from Lisa yesterday, is that actually, what we need to do is optimize one level higher. We need to optimize at the level of PyTorch or training or inference. And CUDA is not the thing, even though that’s what Nvidia’s perceived moat is. Do you agree with that? Are you dependent on the chip, or are you dependent on their software infrastructure? Or are you working at a level above that?
Well, I think the industry at large benefits a lot from CUDA, which they’ve been investing in for a while. So if your business is like, “I got a whole bunch of different models, and I need to performance tune all of them,” the PyTorch-CUDA combo is pretty essential. We don’t have a ton of models that we’re optimizing.
So we have a whole bunch of other tools like Triton, which is an open-source tool that OpenAI developed, and a bunch of other things that help you basically do exactly what you said, which is up-level the abstraction so that you can be developing high-performance kernels for both your inference and training workloads, where it’s easier to choose what piece of hardware you’re using. The thing to remember is even if it’s just Nvidia, you have multiple different hardware SKUs that you’re deploying in production at any point in time, and you want to make it easy to optimize across all of those things.
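For a taste of what that up-leveling looks like in practice, here is essentially the canonical vector-add example from Triton’s public tutorials, lightly commented. It assumes a CUDA-capable GPU with the triton and torch packages installed; note that the kernel is written in Python, not CUDA C++.

```python
# Triton's canonical vector-add kernel (from the project's public tutorials),
# lightly commented. Assumes torch + triton installed and a CUDA-capable GPU.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                    # which block this program handles
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                    # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)                 # one program per 1024 elements
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

a = torch.rand(4096, device="cuda")
b = torch.rand(4096, device="cuda")
assert torch.allclose(add(a, b), a + b)
```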
So I asked Lisa yesterday, “How easy would it be for Microsoft to just switch from Nvidia to AMD?” And she told me, “You should ask Kevin that question.” So here you are. How easy would it be right now if you needed to switch to AMD? Are you working with them on anything? And how easy would it be in the future?
Well, let me deploy my finest press training and say that if you are an API customer right now — like you’re using the Azure OpenAI API or using OpenAI’s instance of the API — you don’t have to think about what the underlying hardware looks like. It’s an API. It is presented to you to be the simplest possible way to go build an AI application on top of that API.
So yeah, not trivial to muck around with this hardware. It’s all big investments. If that’s the way that you’re building your AI application, you shouldn’t have to care. And there are a bunch of people who are not building on top of these APIs where they do have to care. And then, that’s a choice for all of them individually about how difficult they think it might be. But for us, it’s a big complicated software stack, and the only part of that that the customer sees is that API interface.
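In code, that API surface looks roughly like the sketch below, using the Azure client in the openai Python SDK (v1-style). The endpoint, key variable, and deployment name are placeholders; the point is that nothing about the underlying hardware appears anywhere in the call.

```python
# A sketch of calling the Azure OpenAI API with the openai Python SDK (v1+).
# Endpoint, key variable, and deployment name are placeholders. Note that no
# hardware detail (Nvidia, AMD, or otherwise) appears anywhere in the call.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)

response = client.chat.completions.create(
    model="your-gpt-deployment",  # an Azure deployment name, not a GPU choice
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```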
The other theme that a bunch of folks at the conference yesterday asked me to ask you about is open source. You obviously have a huge investment in your models. OpenAI has GPT. There’s a lot of action around that. On the flip side, there’s a bunch of open-source models that are really exciting. You were talking about running models locally on people’s laptops. Are there real moats around these big models right now? Or is open source actually just going to come and disrupt it over time?
Yeah, I don’t know whether it’s even important to think about the models as moats. There are some things that we’ve done, and a path forward for the power of these models as platforms, that are just super capital intensive. And even if you’ve got a whole bunch of breakthroughs on the software, I don’t think they become less capital intensive. So, whether it’s Microsoft or someone else, the thing that will have to happen with all of that capital intensity — because it’s largely about hardware and not just software, and it’s not just about what you can put on your desktop — is that you have to have very large clusters of hardware to train these models. It’s hard to get scale by just fragmenting a bunch of independent software efforts.
So, I think the open-source stuff is super interesting, and I think it’s going to help everybody. We’ve open-sourced this super good model called Phi that’s trending on Hugging Face as of last week. There are a bunch of open-source innovations we’re excited about. But I think the big models will continue to make really amazing progress for years to come.
I’ve got a few more questions. If you have questions for Kevin, please start lining up. I’d love to hear from all of you. I want to make sure we talk about authenticity and metadata, marking things as real, something you and I have talked about a lot in the past. There are a lot of ideas about how you might mark content as real or mark it as generated by AI. We’re going to see some from Adobe later today, for sure. Have you made any progress here?
Yeah, I think we have. One of the things I think we talked about before is for the past handful of years, we’ve been building a set of cryptographic watermarking technologies and trying to work with both content producers and tool makers to see how it is we can get those cryptographic watermarks — they’re manifests that say, “This piece of content was created in this way by this entity” — and have that watermark cryptographically preserved with the content as it gets moved through transcoders and CDNs and as you’re mashing it up a bunch of different ways.
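A minimal sketch of the mechanism Scott is describing, using the Python cryptography package, might look like the following. Real provenance systems (C2PA-style manifests, for example) are far more elaborate; this only illustrates the shape of the idea.

```python
# Toy provenance manifest: hash the media bytes, record who made them and how,
# and sign the manifest so the claim can be verified after re-hosting. Real
# systems (e.g., C2PA-style manifests) are far more elaborate than this.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(media: bytes, creator: str, tool: str, key: Ed25519PrivateKey):
    manifest = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "creator": creator,
        "generator": tool,  # e.g., an AI model vs. a camera
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, key.sign(payload)

key = Ed25519PrivateKey.generate()
manifest, sig = make_manifest(b"...image bytes...", "Example Studio", "an-image-model", key)

# Anyone with the public key can check the claim wasn't tampered with;
# verify() raises InvalidSignature on any mismatch.
payload = json.dumps(manifest, sort_keys=True).encode()
key.public_key().verify(sig, payload)
```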
That might work for images. Can you do that for text? It feels like text is a big deal right now. A bunch of lawsuits are brewing.
Text is definitely harder. There are some things that are research-y that folks are working on, where you can, in the generation of the text, subtly add a statistical fingerprint to how you’re generating the text. But it’s much harder than visual content, where it’s easy to just hide the watermark in the noise in the pixels and not have it really alter the experience you have as a user viewing the image or the video. So it’s a tougher problem, for sure.
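For the curious, one published flavor of that statistical fingerprint, often called “green-list” watermarking in the research literature, can be sketched like this. It is a deliberately simplified toy, not any vendor’s actual scheme.

```python
# Toy version of a "green list" text watermark: seed a RNG with the previous
# token, mark half the vocabulary green, and bias sampling toward green tokens.
# A detector replays the seeding and checks whether green tokens are
# overrepresented. Deliberately simplified; not any vendor's actual scheme.
import random

VOCAB_SIZE = 50_000
BIAS = 2.0  # added to green-token logits before sampling

def green_list(prev_token: int) -> set[int]:
    rng = random.Random(prev_token)  # keyed by context, so it's reproducible
    return set(rng.sample(range(VOCAB_SIZE), VOCAB_SIZE // 2))

def watermark_logits(logits: list[float], prev_token: int) -> list[float]:
    greens = green_list(prev_token)
    return [l + BIAS if i in greens else l for i, l in enumerate(logits)]

def detect(tokens: list[int]) -> float:
    """Fraction of tokens in their context's green list: ~0.5 for ordinary
    text, noticeably higher for watermarked text."""
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```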
But it doesn’t mean that you can’t solve it. You don’t have to do it with cryptographic watermarks. You could also just say, “Hey, we’re going to adopt a set of conventions in the products that we build, where we clearly identify in the products when you have AI-generated text.” So with an email message, for instance, if you use Microsoft 365 Copilot to write an email, we can add a piece of text to that message that says… Or even there with email—
There’s nothing I want more than someone sending me an email that says it was generated from AI at the bottom. When I think about my inbox, that’s what would fix it.
Hold on, there’s like a party line of people waiting to talk to you.
Yeah, but these are all preferences. We will have to figure out what that line is.
Oh, I know what my preference for those emails is. I’m going to tell Cortana to delete ‘em right away. Fair warning to all of you. If you write me with AI, it’s gone.
Audience Q&A
Nilay Patel: Alright. Please introduce yourself.
Pam Dillon: Good morning, Kevin. Pam Dillon of Preferabli. This question is not being generated by ChatGPT. We’ve been talking a lot about assimilating the world’s knowledge in a general sense. Do you think about how we’re going to start to integrate specialized bodies of knowledge, areas where there’s real domain expertise? Say, for example, in medicine or health, or in sensory consumer domains?
Kevin Scott: Yeah, we are thinking a lot about that. And I think there’s some interesting stuff here on the research front that shows that those expert contributions that you can make to the model’s training data, particularly in this step called reinforcement learning from human feedback, can really substantially improve the quality of the model in that domain of expertise. We’ve been thinking in particular a lot about the medical applications.
So one of my direct reports, Peter Lee, who runs Microsoft Research and who’s also a fellow at the American Medical Association, wrote a great book about medicine and GPT-4, and there’s a whole bunch of good work. And all of that is exactly what you said. It is how — through reinforcement learning, through very careful prompt engineering, through selection of training data — you can get a model to be very high performing in a particular domain. And I think we’re going to see more and more of that over time, with a whole bunch of different domains. It’s really exciting, actually.
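The technique Scott names, reinforcement learning from human feedback, hinges on a reward model trained from expert preference pairs. Here is a minimal sketch of the standard pairwise (Bradley-Terry) loss in PyTorch, with the reward model itself left abstract.

```python
# Sketch of the core of RLHF's reward-modeling step: experts pick the better
# of two answers, and the reward model is trained so the chosen answer scores
# higher, via the standard pairwise (Bradley-Terry) loss. The reward model
# itself is left abstract; any scalar-output network fits here.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy batch: scalar rewards for four expert-labeled comparison pairs.
r_chosen = torch.tensor([1.2, 0.4, 2.0, 0.9], requires_grad=True)
r_rejected = torch.tensor([0.3, 0.5, 0.1, 0.8])
loss = preference_loss(r_chosen, r_rejected)
loss.backward()  # in training, gradients flow into the reward model's weights
```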
NP: Over here, please introduce yourself.
Alex: Hi Kevin, my name is Alex. I have a question about provenance. Yesterday, the CEO of Warner Music Group, Robert Kyncl, was talking about his expectation that artists are going to get paid for work that is generated off of their original IP. Today, obviously, provenance is not given by LLMs. My question to you is from a technical standpoint: Let’s say that somebody asks to write a song that’s sort of in the style of Led Zeppelin and Bruno Mars. But in the generation, the LLM is also using music by the Black Keys because they kind of sound a lot like Led Zeppelin. Would there be a way, technically, to be able to say, from a provenance standpoint, that the Black Keys’ music was used in the generating of the output so that artist could get compensated in the future?
KS: Yeah, maybe. Although, that particular thing that you just asked, I think, is a controversial thing for human songwriters. I know there was this big lawsuit with Ed Sheeran about exactly this, where it’s pretty easy for a human songwriter to be influenced in very subtle ways. And a lot of pop songs, for instance, have a lot of harmonic similarity with one another.
So, I think you have to think about both sides of things. AI aside, how do you actually measure the contribution of one thing to another? Which is hard. And then technically, if we were able to do that part of the analysis, you probably could figure out some technical solutions. It’s very easy to make sure that you are not having generations that are parroting, either in whole or in snippets, so that’s possible. It’s a little bit more technically difficult, I think, to figure out, across the gigantic volume of training data, how any one piece of data has influenced a particular generation.
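The “easy” check Scott mentions, verifying that a generation isn’t parroting training data in whole or in snippets, can be approximated with an n-gram overlap index. A toy sketch follows; it is not any production deduplication system.

```python
# Toy n-gram overlap check for "parroting": index the training corpus's
# n-grams, then measure how much of a generation appears verbatim.
# Illustrative only; production deduplication works at far larger scale.
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def build_index(corpus: list[str], n: int = 8) -> set[tuple[str, ...]]:
    index: set[tuple[str, ...]] = set()
    for doc in corpus:
        index |= ngrams(doc, n)
    return index

def parrot_score(generation: str, index: set[tuple[str, ...]], n: int = 8) -> float:
    """Fraction of the generation's n-grams found verbatim in training data."""
    grams = ngrams(generation, n)
    return len(grams & index) / max(1, len(grams))
```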
NP: Music copyright is like… Just find me later, and we’ll talk about it. It’s one of my favorite things. Go ahead.
Gretchen Tibbits: Hi, Gretchen Tibbits, DC Advisory. Rewinding slightly from the question the gentleman just asked: there have already been some cases and some questions about the information from publishers, from creators, that has been used to train these models. Forget about generating music and the rest; the models have been trained on that material, and people are asking for percentages or rights or recognition of it. I’m wondering — and not asking you to comment on any active case — but philosophically, what are your thoughts on that?
KS: Oh God, we’ve got 25 seconds on the timer for a question like that.
NP: No, you’re going longer. Don’t worry. We’re going to take a few more. The clock can’t save you now.
KS: So, here’s a thought exercise. By a raise of hands, how many of you have read Moby Dick? So, I’m guessing that all of you who raised your hand probably read Moby Dick many, many years ago — high school, college maybe. And if I asked you, you could tell me Moby Dick is about a whale. There’s a captain. Maybe you remember his name is Ahab. Maybe he has some sort of fixation issues, focusing on this animate object. You could tell me a bunch of things about Moby Dick. Some of you who are literature fans might even be able to recite a passage or two from Moby Dick exactly as they appear in the book.
None of you, I would wager, could recite verbatim, if I asked you to, the third paragraph of page 150 of the Penguin sixth printing of Moby Dick. And these neural networks work a little bit like that. They are not storing the content of music or books or papers that people are generating, not even in the way that a search engine does. They are ingesting some of these things. And I think everybody thinks right now — and this is part of what we will determine, I’m guessing, over the coming years — that all of the training that is being done right now is covered by fair use.
NP: Well, some people think that.
KS: Some people think that.
NP: Some very important people do not.
KS: And that’s the thing that will get sorted out. And I don’t know the answer to that question because it relies on judges and lawmakers, and we will sort of figure this out as a society. But the thing that the models are attempting to do isn’t… They’re not some gigantic repository of all of this content. You’re attempting to build something that, like your brain, can remember conceptually some of these things about a thing that was present in the training. And we will sort of have to see…
So let me just back all the way up and say nobody wants to… As an author myself, I don’t want to see anyone disenfranchised. The economic incentives for people to produce content and to be able to earn a living writing books and being… Especially, God forbid, folks who sit down and do the work of writing a really thoughtful, super well-researched piece of nonfiction. Or someone who pours their heart and soul into writing a piece of fiction. They need to be compensated for it. And this is a new modality of what you’re doing with content. And I think we still have some big questions to ask and answer about exactly what’s going on and what is the fair way to compensate people for what’s going on here.
And then, what’s the balance of trade, too? Because hopefully, what we’re doing is building things that will create all sorts of amazing new ways for creative people to do what they’re best at, which is creating wonderful things that other people will consume that creates connection and enhances this thing that makes us human.
NP: Alright. We have time, very quickly, for a couple more. So just very quickly, Jay, hit me.
Jay Peters: Hi, Jay Peters for The Verge. When you mentioned that you don’t want to read spammy AI-generated garbage, that made me think of this thing last month, where Microsoft’s MSN network published this kind of spammy-feeling travel article that recommended a food bank as a travel destination in Ottawa. And that was apparently made with a combination of algorithmic techniques and human review. So if something whiffs that badly with human intervention, how can we fully trust AI-generated summaries?
KS: Yeah. With that particular thing, it was less about the AI and more about how the human piece of that was working. Honestly, that would’ve been a little bit better if there’d been more AI.
NP: You’re blaming the people.
KS: No, I’m not blaming anyone. I think the diagnosis of that problem is that some of these things on MSN — and I know this is true for other places — get generated in really complicated ways. It wasn’t the case that there was, at some point, a Columbia-trained journalist sitting down writing this, and all of a sudden, there was a faulty, defective AI tool doing the thing that they used to do. That’s not what was going on here.
NP: Alright. Very, very quickly.
Dan Perkel: Hi, Dan Perkel, IDEO. I had a question about an exchange you had earlier about flooding the world with AI-generated content and the discussion about quality. And in the scenario you were thinking of, who’s determining the quality of that content, and how are they determining it? Because I wasn’t quite following where that was going.
KS: Well, I think you all are going to judge the quality of the content. If it’s directed at you, you’re the ultimate arbiters of, “Is this good or bad? Is it true, or is it false?” One of the seeds I will plant with you all is, one of the things that these AI tools may prove to be useful at is actually helping navigate a world where there are going to be a whole bunch of tools that are able to generate low-quality content. And having your own personal editor-in-chief that’s helping you assemble what you think are high-quality, truthful, reliable sources of information and helping you sort of walk through this ocean of information and identify those things will be, I think, super useful. I think what you all are doing, by the way — and many of you in the room, I’m sure, are in media businesses — I think having all of this content out there makes your job more important.
Way more important. Because somebody has to have someone that they trust, who has high editorial standards, and who is helping figure out signal and noise. It’s absolutely true.
NP: Alright. We got to leave it here. I’m available for a very high fee. Thank you so much, Kevin. I really appreciate it.