In this episode of William Blair Thinking Presents, equity analysts Jason Ader, Arjun Bhatia, and Ralph Schackart explore the disruptive, ubiquitous subject of generative AI and its broad implications across the technology sector, the economy, and society. Leveraging their expertise in infrastructure, SaaS, and the consumer internet space, they deliver a comprehensive discussion tailored for those seeking a deeper understanding of generative AI.
[00:00:10] Chris: Welcome to William Blair Thinking Presents, a new podcast series that aims to provide in-depth expertise from our award-winning equity research and capital advisory teams on today's financial and economic landscape. I'm Chris Thonis, head of Equities Marketing and Media Relations, and I'm delighted to be your host.
[00:00:24] Chris: Hi everybody. We are welcoming a very fun group of analysts today on our new episode of William Blair Thinking Presents. This includes Jason Ader, CFA, a partner at William Blair. He's the co-head of the technology, media, and communications sector. We've also got Arjun Bhatia, co-group head of the technology, media, and communications sector, and research analyst Ralph Schackart, CFA, a partner who specializes in internet and digital media.
[00:00:51] Chris: So, these three analysts just released a new report aimed at serving as a reference piece for investors looking to get a better understanding of generative AI (or “gen AI”) technology and its implications for the tech sector. The report is called Generative AI: The New Frontier of Automation, and it focuses on the generative AI value chain, including everything from the infrastructure that supports AI models to the downstream applications that generative AI can create. So, with that, Jason, I'm going to kick this over to you. Do you mind just providing listeners with a 500-foot view of what this report is all about and what inspired your team to pursue this particular area of AI so deeply?
[00:01:29] Jason: Yeah, absolutely, and thanks Chris for having us. You know, gen AI is all anybody's talking about right now. I've been on the road meeting with clients, and something like 75% of my meetings have been occupied with discussions about generative AI and its near-term and long-term implications. We wanted to do a deep dive into the topic to really separate the hype from the reality, for better or worse. There has been a lot of super interesting new tech in my 25 years on the sell side, but I can honestly say that the last time I felt this way about a technology's broad implications on the economy and society was back in the early '90s with the dawn of the internet.
[00:02:06] Jason: What's even more amazing is that we're just scratching the surface with this technology. In the coming months and years, we expect to see exponential advances in the capabilities and qualities of the gen AI models that vendors are building. So, really our goal with this paper was to create an initial reference piece for investors looking to gain a better understanding of the basics of gen AI technology, why it's so groundbreaking, potential winners and losers, and then where the monetization opportunities are across the value chain.
[00:02:35] Chris: Arjun and Ralph, anything you guys want to add to that?
[00:02:39] Arjun: I think generative AI is going to impact many different aspects of our life from personal to work. AI is not new, it's always kind of been working in the background. But as ChatGPT has come out, the public has learned about generative AI. It has gone from being in the background to now being in the forefront and that's a major evolution in the AI landscape. So, we wanted to take a crack at just explaining this dynamic. Explaining what the technology could do, what impact it might have on our lives, what it might mean for businesses, what it might mean for consumers, right? And just to highlight some important questions that investors should think through as they're digging into generative AI.
[00:03:24] Ralph: The only thing I'd add, Chris, is that the way we approach this is maybe a little bit different, from the infrastructure side to the SaaS side to the consumer internet side. So, we've published a somewhat different set of perspectives compared with some of the other publications out there.
[00:03:40] Chris: Great. This is a pretty dense report. I wanted to jump into some of the specific areas. First and foremost, you talk about how generative AI represents the next big technology platform shift. So, this is something you guys just touched on a little bit, but this is after the internet, it's after mobile, then the cloud with the potential to be as transformative as electricity or the steam engine. Can you dive into that a little bit deeper?
[00:04:06] Ralph: Sure. I'll take that one, Chris. So, when the light bulb was invented and the first steam engine was shown to people, I doubt anybody had any idea of the downstream implications of those tech advancements. I think that's exactly where we are right now with gen AI. What's really new and different about gen AI versus historic AI is that it synthesizes data and it generates content, which is something, candidly, the prior tech cycles didn't do. So, when you think through the implications of this profound nature of content generation, it's really hard to know what that looks like.
[00:04:42] Ralph: Also, I think what separates this technology from others is that the learnings compound and it's recursive in nature, which could lead to exponentially more powerful use cases. You're hearing about the explosive nature of gen AI, but trying to think five or 10 years into the future, it would be very difficult to see what that looks like at this point.
[00:05:01] Jason: Yeah, I would also add that I feel the last three decades have led up to this moment in history. You have this network of networks. You have this incredible hardware performance and power today with the parallelization of computing, which allows you to create these models. You have the software algorithms, which have matured and have been developed over these last three decades and maybe more, probably since the '50s when AI really originated.
[00:05:35] Jason: And you have this massive amount of data that's been created, partly because we have all these mobile devices in our pockets. So, you combine the network with the hardware, with the software, with the data, and you get gen AI.
[00:05:50] Chris: So, there's really no other technology that has evolved this quickly. Or would you say this is in line with maybe something else that could be somewhat relevant?
[00:06:00] Ralph: As Jason pointed out, it's really a combination of a collection of multiple technologies coming together at the same time. I think what's really different about this is just the compounding nature of it. Literally, every night you go to bed, it's processing trillions and trillions and trillions of prompts, and all of this coming together in unison has really enabled this moment.
[00:06:22] Jason: Yeah, like the new chips that just came out I think are something like 500 times more powerful than the prior ones. This is going to be exponential and like Ralph said, the machines don't rest.
[00:06:36] Chris: Right.
[00:06:38] Ralph: One thing that's worth pointing out, Chris, is I think the three of us would all agree, we've never written a report that was so dynamic and changed so much in draft mode to final print. The evening before we were literally having to edit more content because it was changing so rapidly. Just to give some perspective of how quickly things are moving.
[00:06:58] Chris: I never even thought about that, but that makes a lot of sense. We'll touch on more use case stuff a little later here because I do want to jump into that a little bit deeper. But first, before we do that, you call out data as being vital to generative AI. You write, quote: “Generative AI models are only as good as the data on which they are trained. So, companies with unique and proprietary data sets will have a competitive advantage.” So, can you provide some examples of what these unique proprietary data sets might look like?
[00:07:30] Arjun: Yeah, if we take a step back and just think about what the technology is doing, it's basically making a very educated guess on what the next word in a sentence should be based on a prompt that a user gives it. I'm simplifying a little bit because what's really happening is trillions of calculations to generate that output.
But it's just guessing the next word in the sentence, and it's an educated guess. So it needs to be trained on something to be informed, essentially. What it's being trained on is data, right? So, the more data it has, and the higher quality that data is, the better guesses it can make, resulting in a more positive user experience.
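The next-word idea Arjun describes can be sketched with a toy model. This is purely illustrative and assumes nothing about any vendor's architecture: a bigram counter that "guesses" the most frequent continuation seen in its training text. Real large language models use billions of learned parameters rather than raw counts, but the core task, predicting the next word from context, is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus; real models train on vastly larger data sets.
training_text = (
    "the model guesses the next word . "
    "the model is trained on data . "
    "the data makes the model better ."
)

# Count which word follows which in the training data.
follows = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "model" follows "the" most often here
```

More and higher-quality training data sharpens those counts, which is exactly the "better guesses" dynamic described above.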
[00:08:13] Arjun: In consumer use cases, this is straightforward. The data aspect of it is pretty straightforward. You scrape the data on the open internet, and you as a user can ask a general knowledge question to generative AI, and it will give you an answer.
[00:08:30] Arjun: Where the data aspect starts getting more interesting is in enterprise use cases, because that data is not readily available on the open internet. Businesses have a lot of data that is in a closed database behind a firewall. It's not open to be scraped, right? Every business has customer data. Let's take customer data as an example. If you can now fine-tune a foundational generative AI model on customer data for your specific company, then the generative AI can start to write an email for you in the language and tone and style that you use inside your company. It can generate that response based on the interaction history you have with that customer. That is something that only works if you have that past interaction history with that customer, right? And customer data is just one example. Employee data is the same thing for HR use cases, right? And software code is the same concept for generating the next line of code in a program. The data is the engine that's driving this gen AI train.
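As a hypothetical sketch of the enterprise pattern Arjun describes, a company's private interaction history can be combined with the user's request before anything reaches a model. All names and data here are made up, and `build_prompt` is an illustrative helper, not any vendor's API; a production system would pass this prompt to a fine-tuned or retrieval-augmented model.

```python
# Fabricated proprietary data for illustration only.
interaction_history = [
    "2023-04-02: Customer asked about renewal pricing.",
    "2023-04-10: We offered a multi-year discount.",
]

def build_prompt(customer, request, history):
    """Combine proprietary interaction history with the request so a
    model can respond in context -- data the open internet never sees."""
    context = "\n".join(history)
    return (
        f"You are drafting an email to {customer}.\n"
        f"Past interactions:\n{context}\n"
        f"Task: {request}"
    )

prompt = build_prompt(
    "Acme Corp",
    "Follow up on the renewal discussion in our usual tone.",
    interaction_history,
)
print(prompt)
```

The competitive advantage lives in `interaction_history`: only the company holding that data can produce this prompt.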
[00:09:47] Chris: So, remember back, I want to say, six to 10 years ago, big data was all the rage. So, I imagine these companies have now almost taken a step forward and are leading the pack as these big data types of companies. Would that be accurate?
[00:10:05] Arjun: Yeah. You know, over the last 10, 15 years, there's a lot of companies that have just built a moat around having a data asset. They've become systems of record and a lot of enterprise data is stored in those systems of record, and now they're putting that data to a different use.
[00:10:25] Arjun: It was being used in more traditional AI use cases initially. It was being used for analytics and whatnot, and now it's being used to power large language models. So those companies, right? Our thesis here is that the companies that have built this data advantage over the last 10, 15 years, the incumbents actually have a great opportunity to benefit from this gen AI wave that's coming.
[00:10:50] Chris: Ralph, we can jump back to the use cases because I think that is such a dense subject area. As far as these enterprise use cases, I know in the report you call it a productivity accelerant, which Arjun you touched on a little bit right there, especially for knowledge workers in the economy. As far as being in such an early stage of the development of these use cases, how do you see this evolving over the next few years? I mean, obviously writing emails and there's a number of things this can do, but we're talking in the last, what, how many months has it been since this has really gone to market? In the next few years, three, or five years where is this going to land? I mean, what are we talking here?
[00:11:33] Ralph: Sure, it's going to quickly evolve from, let's just start with, a simple search today that gives you a point solution, so to speak, to stacking multiple prompts. So, as Arjun alluded to: find me a customer segment that looks like this, draft an email, send it out, and then when somebody replies "yes," book it in my Outlook calendar and then do the call. What's interesting about that simple scenario is the gen AI will learn how to be better about writing those emails. Like, why did one person reply and not somebody else? So that's sort of the learning nature of it.
That's a very simple use case. But just on the efficiency side, the product managers we talk to say it's making an engineer about 30% more efficient today. But the concept of a super engineer being able to produce five or 10 times the content likely doesn't come into view for maybe another year or two, depending on who you speak to, but that gives you a sense of scale. Also, if you think about the productivity boost that we might get from AI, perhaps that counteracts some of the demographic shifts of slowing population growth. It certainly is going to take some time for this to mature, but overall, we think the efficiency gains from gen AI should absolutely drive incremental cost savings for most enterprises and provide a pretty strong tailwind to profit margins.
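The prompt-stacking workflow Ralph outlines can be sketched as a simple chain in which each step's output feeds the next prompt. Here `call_model` is a stand-in for a real LLM endpoint (an assumption, not an actual API), and the steps are illustrative.

```python
def call_model(prompt):
    # Placeholder: a real implementation would call an LLM API here.
    return f"[model output for: {prompt}]"

def run_chain(steps, initial_input):
    """Feed each step's output into the next step's prompt."""
    result = initial_input
    for step in steps:
        result = call_model(f"{step}\nInput: {result}")
    return result

steps = [
    "Find me a customer segment that looks like this.",
    "Draft an outreach email for each customer in the segment.",
    "For every 'yes' reply, book a slot in my Outlook calendar.",
]
final = run_chain(steps, "segment criteria: mid-market accounts")
```

Layering a feedback step on top, for example learning which email drafts actually got replies, is where the recursive improvement Ralph mentions would come in.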
[00:12:56] Chris: You say later in the report, maybe it's earlier in the report, you state that generative AI will be a giant new workload for the cloud, likely re-accelerating growth for cloud service providers. So, I know this is probably music to the ears of these CSPs, but do you mind jumping a bit into that?
[00:13:15] Jason: Yeah, what we were referring to there was there's been a slowdown over the last 12 months or so in cloud spending as organizations coming out of the pandemic and they're looking at their costs. I mean, obviously macro is a big factor here, but I think there was a lot of waste, a lot of inefficiency in how companies were using the cloud. So, you had this theme of optimization and rationalization right now. I think a lot of people have been wondering is this the peak growth rates for cloud? Or could we see some type of reacceleration at some point? And I think with gen AI coming onto the scene it really does provide a significant tailwind. Whether it drives significant reacceleration or not, it's hard to say, but I would suspect that all the big cloud providers are going to benefit from this movement, this phenomenon. Again, we're at the very beginning.
[00:14:13] Arjun: These are very compute-intensive models. They require a lot of processing power, and every individual model has billions of parameters. They're doing trillions of calculations to generate that output.
So, we will see the computing power that's required by enterprises to run large language models accelerate pretty significantly into the near future. We're still early in this wave but we are seeing signs that it's coming and that more and more of this is going to be required as the adoption of gen AI increases.
[00:14:49] Chris: I'd love to talk about the potentially profound impacts of these use cases to knowledge workers and corporate costs. We've touched on it a little bit. Arjun I'd love for you to jump in a little bit more if you don't mind.
[00:15:01] Arjun: Yeah. In a way, the sky is the limit. Generative AI, large language models, it's kind of a blank slate, right? They’re building blocks in the tech stack. So that means you can build use cases on top of it and we're right now just seeing the initial use cases that are being developed with a search replacement tool like ChatGPT.
[00:15:26] Arjun: But eventually you're going to see it proliferate across every knowledge worker, right? Your jobs are going to be impacted by this. I'm not saying everyone's going to lose their jobs. That's not going to happen. But it's going to be a productivity accelerant in every role that you do so it can help you write emails, right? It can help you summarize your emails as well. It can help you create content to post on social media, right? And over time we can even get into the personal use case where you can have somewhat of a personal assistant that's being run by generative AI, but also customer service, right?
[00:16:06] Arjun: So, I would just say that generative AI is kind of this foundational technology, right? So, you can build on top of it. And it's very similar to the way cloud was a foundational technology and mobile was a foundational technology. In the early days of mobile, who would've predicted it would be as impactful as it is today? Same thing with cloud. But once you open that up to people to develop on top of, that's when imagination takes over and becomes the only limiting factor.
[00:16:37] Jason: I would also say that we're sitting here and thinking through the implications on knowledge workers but there's obviously a lot of folks that have pretty doomsday scenarios in terms of what it means for white-collar jobs over the next five to 10 years. I don't think any of us, Arjun, Ralph, or myself really are in a position to really forecast and predict exactly what's going to happen because this stuff is moving really fast. But when you have foundational changes and platform shifts like the industrial revolution and things like that, a lot of jobs do go away, but a lot of new jobs are created. I think in aggregate we're not overly concerned about this, but I think it's also an important caveat to say that we just really don't know at this point. It's too soon to tell.
[00:17:29] Chris: It's actually a perfect segue. I know you touch on risk a little bit. We could always harp on existential risk, which media loves to do and I think a lot of people love to do, but the reality is, there are more moderate risks associated with this. If you don't mind just walking through some of what you laid out in the report in that regard, that could be of interest to some of the listeners.
[00:17:51] Jason: I think one is the risk of sensitive data leakage, as employees can input proprietary corporate data into these large language models and the models can retain that data. So that is a no-no, and you actually have seen a bunch of companies ban the GPT-type models because of that. There are even private GPT models that are coming onto the scene. You can think about it as an enterprise-only model for that specific enterprise versus something that's more in the public domain.
[00:18:25] Jason: Another risk, and I think this may be the biggest one near term because it could potentially stunt some of the development here, is the legality of some of these AI models training on copyrighted content scraped from the internet. That could be text from newspapers, proprietary images, or software code that has some copyright conditions. We've already seen a bunch of lawsuits here. I think the courts are going to need to decide this in the coming months. And then finally, the risk of misinformation and disinformation is incredibly worrisome just given the political climate that we live in right now. It's going to be increasingly difficult to know what is true when you go online.
[00:19:09] Chris: Sure. Yeah. I think we saw that already with a photo that was posted to social media of an explosion at the Pentagon, which for a moment tanked the stock market until it was discovered it was not real. So that's the kind of stuff that I think scares a lot of people. Ralph, did you have anything to add to that?
[00:19:29] Ralph: Yeah, I guess my broader concern would be the pace of innovation. As we sit here today, it has largely been driven by humans, and as you move forward, do we lose the human guardrails, or sort of judgment? One scenario is if a nefarious actor, or even a non-nefarious actor, was to put some of these foundational models together and then they continued to scale and compound without human involvement or interference. That I think is concerning. Hopefully, there are some guardrails put into these models to perhaps alleviate some of that risk, but I think it's still important to have humans advancing the pace of innovation, at least in the background.
[00:20:09] Arjun: I'll say the Pentagon example is really interesting, right? Because I think one of the things from a risk perspective that we need to think about is the other side of it as well, which is what are the risk mitigants? And I think what you'll see emerge, or what is likely to emerge over time is some sort of an industry or technology or tool that can actually start to decipher what content is generated by AI versus what content is generated by humans. So, I think when we're thinking of risk, we should think about the other side of the spectrum as well. I think that will become a very important use case, especially on social media because that can be such a breeding ground for fake content. That would be a concern that would have social implications, political implications, you name it. So, I don't think the industry is going to necessarily stand still and let these risks run wild. I think there will be guardrails put in place and I think there will be another industry that develops that handles that.
[00:21:14] Chris: Got it. All right, so before we end this, I would love for each one of you to give one thing that you are most excited about with generative AI. So, with that, I'll start with Ralph. List one particular thing you are extraordinarily excited about that is generative AI-driven.
[00:21:35] Ralph: I think on the content side, today, why do we have one size fits all for a piece of content? Why isn't there something specific to Chris, Arjun, or Jason? That's more on the consumer side. On the enterprise side, I just think of the efficiencies that we can't even comprehend today. Are companies going to be a lot smaller and a lot more profitable? Do you need fewer employees? And then the knowledge workers are sort of advancing. So to me, that gets me really excited. I would be lying to you if I said I knew exactly what that looks like in a few years, but that sort of scenario is pretty interesting, particularly for what we do as analysts.
[00:22:11] Chris: Jason over to you.
[00:22:13] Jason: Yeah, I'm going to build on Ralph's comment, which is just the power now to create a lot more software than we could before, in a lot less time. I think the velocity of invention and innovation from software developers really goes way up here, because they're just not spending 70% of their time dealing with documentation and all this mundane stuff; they're actually creating ideas.
[00:22:46] Arjun: Yeah, I agree with what Jason and Ralph said, those are certainly exciting areas. I'm going to take it in a different direction. One of the things I'm more excited about, just from a personal aspect, is personal assistants that are generative AI-based. We've had these voice-based assistants for a while now, like Siri or Alexa or something like that. But they haven't really worked on a consumer level. So, I'm ready to see those personal assistants take the next step and if they're generative AI-powered or powered by large language models I think the quality will really improve and it'll be interesting. It'll be something like an Iron Man, a Jarvis-type system, which would be pretty cool.
[00:23:33] Chris: All right, so with that Jason, Arjun and Ralph, I can't thank you enough for taking the time to join.
[00:23:38] Jason: Awesome.
[00:23:39] Ralph: Thank you.
[00:23:40] Arjun: Awesome. Thanks.
[00:23:42] Chris: For more, head to Williamblair.com/thinking where you can browse our library of white papers, market updates, webinars, and all these other resources designed to provide actionable intelligence for emerging opportunities.
[00:23:54] Chris: If you like what you heard, share and subscribe wherever you get your podcasts.
[00:24:01] Chris: Copyright 2023 William Blair & Company, L.L.C. "William Blair" is a registered trademark of William Blair & Company, L.L.C. As used on this podcast, "William Blair" refers to William Blair & Company, L.L.C., William Blair Investment Management, LLC, and affiliates.
For more information about William Blair, go to www.williamblair.com.
This content is for informational and educational purposes only and not intended as investment advice or recommendation to buy or sell any security. Investment advice and recommendations can be provided only after careful consideration of an investor’s objectives, guidelines and restrictions.
The views and opinions expressed are those of the speakers, and are subject to change over time as market and other factors evolve.