Demis Hassabis at the 2024 Nobel Prize press conference at the Royal Swedish Academy of Sciences
Interview

Inside Google DeepMind’s AI Strategy: An Interview With CEO Demis Hassabis

By Alex Kantrowitz
Demis Hassabis discusses AGI, meaning in the AI era and why he believes information underpins the universe.

AI is evolving fast, but AI researchers still have substantive work ahead of them. Figuring out how to get AI to learn continuously, for instance, is a problem that “has not been cracked yet,” Google DeepMind CEO Demis Hassabis told me earlier this month. Tackling that problem, along with building better memory and finding more efficient use of the context window, should keep Hassabis and his team busy for a while.

In a live Big Technology Podcast recording at Davos, Hassabis spoke with me about the frontier of AI research, when it’s time to declare artificial general intelligence (AGI), Google’s product plans — ranging from smart glasses to AI coding tools — and plenty more. I always find Hassabis’s perspective to be a good indicator of where the AI field is headed, and today I’m publishing our conversation in full.

You can read the full Q&A below, edited lightly for length and clarity, or listen to our discussion on Apple Podcasts, Spotify, YouTube or your podcast app of choice.

A Plateau in AI Progress? 

Alex Kantrowitz: A year ago, there were questions about whether AI progress was starting to tail off. Those questions seem to have been settled for now. What specifically has helped the AI industry get past these concerns?

Demis Hassabis: For us internally, we were never questioning that. Just to be clear, I think we’ve always been seeing great improvements. So we were a bit puzzled by why there was this question in the air.

Some of it was people worried about data running out. And there is some truth in that — has all the data been used? Can we create synthetic data that’s going to be useful to learn from? But actually, it turns out you can wring more juice out of the existing architectures and data. So there’s plenty of room. And we’re still seeing that in the pre-training, post-training and thinking paradigms, and also in the way they all fit together.

So I think there’s still plenty of headroom there, just with the techniques we already know about and tweaking and innovating on top of that.

Alex: A skeptic would say there have been a lot of tricks put on top of LLMs. There’s ‘scaffolding’ and ‘orchestration.’ An AI can use a tool to search the web, but it won’t remember what it learns. Is that just a limitation of the large language model paradigm?

Demis: I’m definitely a subscriber to the idea that maybe we need one or two more big breakthroughs before we’ll get to AGI. And I think they’re along the lines of things like continual learning, better memory, longer context windows — or perhaps more efficient context windows would be the right way to say it — so, don’t store everything, just store the important things. That would be a lot more efficient. That’s what the brain does. And better long-term reasoning and planning.
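To make the “store only the important things” idea concrete, here is a minimal sketch of a salience-filtered memory. Everything in it is invented for illustration — the capacity limit and importance scores are hypothetical, and nothing here reflects how Gemini actually manages context.

```python
import heapq

class SalienceMemory:
    """Toy memory that keeps only the most important items, illustrating
    the 'store the important things, not everything' idea. Importance
    scores are supplied by the caller here; a real system would have to
    learn them."""

    def __init__(self, capacity: int = 3):
        self.capacity = capacity
        self.items: list[tuple[float, str]] = []  # min-heap of (score, text)

    def add(self, text: str, importance: float) -> None:
        heapq.heappush(self.items, (importance, text))
        if len(self.items) > self.capacity:
            heapq.heappop(self.items)  # evict the least important memory

    def context(self) -> list[str]:
        # Most important first, ready to prepend to a prompt.
        return [text for _, text in sorted(self.items, reverse=True)]

memory = SalienceMemory(capacity=3)
memory.add("user's name is Alex", importance=0.9)
memory.add("user said 'hmm'", importance=0.1)
memory.add("user is preparing for Davos", importance=0.8)
memory.add("user prefers short answers", importance=0.7)
print(memory.context())  # the low-salience 'hmm' has been evicted
```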

Now it remains to be seen whether just scaling up existing ideas and technologies will be enough to do that, or we need one or two more really big, insightful innovations. And probably, if you were to push me, I would be in the latter camp. But I think no matter what camp you’re in, we’re going to need large foundation models as the key component of the final AGI systems. Of that, I’m sure. So I’m not a subscriber to someone like Yann LeCun, who thinks they’re just some kind of dead end. I think the only debate in my mind is, are they a key component or the only component? So I think it’s between those two options.

This is one advantage of having such a deep and rich research bench. We can go after both of those things with maximum force — both scaling up the current paradigms and ideas. And when I say scaling up, that also involves innovation, by the way. Pre-training especially, I think we’re very strong on. And then really new blue-sky ideas for new architectures — the kinds of things we’ve invented over the last 10 years as Google and DeepMind, including transformers.

Alex: Can an AI model with a lot of hard-coded stuff ever be considered AGI?

Demis: No — well, it depends what you mean by a lot. I’m very interested in what I would call hybrid systems — or neuro-symbolic, as people sometimes call them. AlphaFold and AlphaGo are examples of that. Some of our most important work combines neural networks and deep learning with things like Monte Carlo Tree Search. So I think that could be possible.
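For readers who want the shape of that combination, here is a minimal sketch of Monte Carlo Tree Search guided by a learned evaluator, the pattern behind AlphaGo. The game (reach 10 by adding 1, 2 or 3) and the hand-written value function standing in for the neural network are both invented for illustration; this is not DeepMind’s code.

```python
import math
import random

def value_net(state: int) -> float:
    # Stand-in for a learned value network: a hand-written heuristic
    # that scores states closer to the target of 10 as better.
    return 1.0 - abs(10 - state) / 10.0

class Node:
    def __init__(self, state: int):
        self.state = state
        self.children: dict[int, "Node"] = {}  # move -> child node
        self.visits = 0
        self.total_value = 0.0

def ucb(child: Node, parent_visits: int) -> float:
    # UCB1: balance exploiting high-value moves with exploring rare ones.
    if child.visits == 0:
        return float("inf")
    exploit = child.total_value / child.visits
    explore = math.sqrt(2 * math.log(parent_visits) / child.visits)
    return exploit + explore

def mcts(root: Node, simulations: int = 200) -> int:
    for _ in range(simulations):
        node, path = root, [root]
        # Selection: descend through fully expanded nodes via UCB.
        while node.children and len(node.children) == 3:
            node = max(node.children.values(),
                       key=lambda c: ucb(c, node.visits))
            path.append(node)
        # Expansion: try one untried move (+1, +2 or +3, capped at 10).
        untried = [m for m in (1, 2, 3) if m not in node.children]
        if untried and node.state < 10:
            move = random.choice(untried)
            child = Node(min(10, node.state + move))
            node.children[move] = child
            path.append(child)
        # Evaluation: the value net replaces a random rollout.
        value = value_net(path[-1].state)
        # Backup: propagate the evaluation along the visited path.
        for n in path:
            n.visits += 1
            n.total_value += value
    # Play the most-visited move, as AlphaGo does.
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts(Node(0)))  # most likely 3: the move that heads fastest toward 10
```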

And there’s some very interesting work we’re doing combining LLMs with things like evolutionary methods — AlphaEvolve — to actually go and discover new knowledge. You may need something beyond what the existing methods do.

But I think learning is a critical part of AGI. It’s actually almost a defining feature. When we say general, we mean general learning. Can it learn new knowledge, and can it learn across any domain? That’s the general part. So for me, learning is synonymous with intelligence, and always has been.

Alex: If learning is synonymous with intelligence, these models still don’t have the ability to continually learn. They have goldfish brains. They can search the internet and figure things out, but the underlying model doesn’t change. How can the continual learning problem be solved?

Demis: I can give you some clues. We are working very hard on it. We’ve done some work — I think the best work on this in the past — with things like AlphaZero, the learn-from-scratch versions of AlphaGo. AlphaGo Zero also learned on top of the knowledge it already had. So we’ve done it in much narrower domains. Games are obviously a lot easier than the messy real world, so it remains to be seen whether those kinds of techniques will really scale and generalize to the real world and actual real-world problems. But at least the methods we know can do some pretty impressive things.

And so now the question is, can we blend that, at least in my mind, with these big foundation models? And so of course, the foundation models are learning during training, but we would love them to learn out in the wild, including things like personalization. I think that’s going to happen, and I feel like that’s a critical part of building a great assistant — that it understands you and it works for you.

And we’ve released our first versions of that just last week: Personal Intelligence is the first baby step towards that. But to really have it, you want more than just your data sitting in the context window. You want something a bit deeper than that — something that, as you say, actually changes the model over time. That’s what you would ideally have. And that technique has not been cracked yet.
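As a rough illustration of the difference, here is a toy contrast between the two approaches: keeping user facts in the context versus feedback that actually updates the model’s weights. The linear scorer and reward signal are made up; this is the textbook online-learning pattern, not how Gemini does personalization.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)  # the "model": a toy 3-feature linear scorer

# Option A: context-window personalization. The model stays frozen;
# user facts are simply prepended to every request.
context = ["prefers concise answers", "writes Python", "based in SF"]

# Option B: continual learning. Feedback permanently changes the weights.
def online_update(x: np.ndarray, reward: float, lr: float = 0.1) -> None:
    """One SGD step on squared error: nudge the scorer toward
    interactions the user liked."""
    global weights
    prediction = weights @ x
    weights += lr * (reward - prediction) * x

for _ in range(200):
    x = rng.normal(size=3)              # features of one interaction
    reward = 1.0 if x[0] > 0 else 0.0   # toy signal: user likes feature 0
    online_update(x, reward)

print(weights)  # weight 0 now tracks the preference; no context needed
```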


Bypassing AGI to Superintelligence 

Alex: Sam Altman, toward the end of last year, told me that AGI is under-defined, and what he wishes everybody could agree to is that we’ve sort of whooshed by AGI and are moving towards superintelligence. Do you agree?

Demis: I’m sure he does wish that, but absolutely not. I don’t think AGI should be turned into a marketing term for commercial gain. I think there has always been a scientific definition of that.

My definition is a system that can exhibit all the cognitive capabilities humans can, and I mean all. So that means the highest levels of human creativity that we always celebrate, the scientists and the artists that we admire. So it means not just solving a math equation or a conjecture, but coming up with a breakthrough conjecture — that’s much harder. Not solving something in physics or some bit of chemistry, some problem, even like AlphaFold’s protein folding. But actually coming up with a new theory of physics, something like Einstein did with general relativity.

Can a system come up with that? Because of course, we can do that. The smartest humans with our human brain architectures have been able to do that in history.

And the same on the art side — not just creating a pastiche of what’s known, but actually being Picasso or Mozart and creating a completely new genre of art that we’d never seen before. And today’s systems, in my opinion, are nowhere near that. It doesn’t matter how many Erdős problems you solve — I mean, it’s good that we’re doing those things, but I think it’s far, far from a true invention, from what someone like Ramanujan would have been able to do.

And you need to have a system that can potentially do that across all these domains. And then on top of that, I’d add in physical intelligence. Because of course, we can play sports and control our bodies to amazing levels — the elite sportspeople that are walking around here today in Davos. And we’re still way off that in robotics, as another example.

So I think an AGI system would have to be able to do all of those things to really fulfill the original goal of the AI field. And I think we’re five to 10 years away from that.

Alex: I think the argument would be that if something can do all those things, that would be considered superintelligence.

Demis: Of course not, because the individual humans could — we can come up with new theories. Einstein did, Feynman did, all the greats that were my scientific heroes — they were able to do that. It’s rare, but it’s possible with the human brain architecture.

So superintelligence is another concept that’s worth talking about, but that would be things that can really go beyond what human intelligence can do. We can’t think in 14 dimensions or plug weather satellites into our brains — not yet, anyway. And so those are truly beyond human, superhuman, and that’s a whole other debate to have — but only once we get to AGI.

Alex: You were asked on the Google DeepMind podcast — which is a great listen — if you have a system today that is close to AGI. I thought it might be Gemini 3. You named Nano Banana. The image generator. What?

Demis: Sometimes you have to have these fun names…

Alex: How is the image generator close to AGI?

Demis: Look, let’s take image generators. But also, let’s talk about our video generator, Veo, which is the state of the art in video generation. I think that’s even more interesting.

From an AGI perspective, you can think of a video model that can generate you 10 seconds, 20 seconds of a realistic scene — it’s sort of a model of the physical world. Intuitive physics, we’d sometimes call it in physics land. And it’s sort of intuitively understood how liquids and objects behave in the world. And obviously one way to exhibit understanding is to be able to generate it, at least to the human eye, being accurate enough to be satisfying to the human eye. Obviously, it’s not completely accurate from a physics point of view, and we’re going to improve that, but it’s steps towards having this idea of a world model — a system that can understand the world and the mechanics and the causality of the world.

And then, of course, that would be, I think, essential for AGI, because it would allow these systems to plan long-term in the real world, perhaps over very long time horizons — which, of course, we as humans can do. I’ll spend four years getting a degree so that I have more qualifications, so that in 10 years I’ll have a better job. These are very long-term plans that we all make quite effortlessly, and at the moment these systems still don’t know how to do that. They can do short-term plans over one timescale, but I think you need these kinds of world models.

And I think if you imagine robotics, that’s exactly what you want for robotics — robots planning in the real world, being able to imagine many trajectories from the current situation they’re in in order to complete some task. That’s exactly what you’d want.
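To sketch what “imagining many trajectories” with a world model might look like, here is a minimal random-shooting planner over a made-up one-dimensional world. The step_model function is a hypothetical stand-in for a learned world model; real robotics planners are far more sophisticated.

```python
import random

def step_model(state: float, action: float) -> float:
    """Stand-in world model: predicts the next state for an action.
    A learned video/world model would replace this toy dynamics."""
    return state + action + random.gauss(0, 0.05)  # imperfect prediction

def plan(state: float, goal: float, horizon: int = 5, samples: int = 500):
    """Random-shooting planner: imagine many action trajectories with
    the model, score each imagined rollout, and act on the best one."""
    best_cost, best_plan = float("inf"), None
    for _ in range(samples):
        actions = [random.uniform(-1, 1) for _ in range(horizon)]
        s = state
        for a in actions:        # roll the trajectory forward in imagination
            s = step_model(s, a)
        cost = abs(goal - s)     # how far the imagined rollout ends from goal
        if cost < best_cost:
            best_cost, best_plan = cost, actions
    return best_plan

print(plan(state=0.0, goal=3.0)[:2])  # the first couple of planned actions
```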

And then finally, from our point of view, this is why we built Gemini to be multimodal from the beginning: able to deal with video and images, and eventually to converge all of that into one model. That’s our plan. It will be very useful for a universal assistant as well.

The Smart Wearables Era 

Alex: I watched the documentary The Thinking Game. Throughout the documentary, you and some colleagues kept pointing your phones at things and asking an assistant what was going on, and I was yelling at the computer, as I usually do: “This guy needs glasses!” He needs smart glasses to be able to do it. The phone is the wrong form factor. What is your vision for AI glasses, and when is the rollout happening?

Demis: I think you’re exactly right. And that was our conclusion. It’s very obvious when you dog food these things internally that, as you saw from the film, we were holding up our phones to get it to tell us about the real world. And it’s amazing that it works. But it’s clearly not the right form factor for a lot of things you want to do — cooking, or roaming around the city and asking for directions or recommendations, or even helping the partially sighted. There’s a huge use case there to help with those types of situations.

And for that, I think you need something that’s hands-free. And the obvious thing, for those of us who wear glasses like me, is to put it on glasses, but there may well be other devices too. I’m not sure that glasses are the final form factor, but they’re obviously a clear next form factor.

And of course, at Google and Alphabet, we have a long history with glasses, and maybe we were a bit too early in the past. But my analysis of it, from talking to the people who worked on that project, comes down to a couple of things: the form factor was a bit too chunky and clunky, and there were the battery life and those kinds of issues, which are now more or less solved. But I think the thing it was really missing was a killer app.

And I think the killer app is a universal digital assistant that’s with you, helping you in your everyday life, and available to you on any surface — on your computer, on your browser, on your phone, but also on devices like glasses when you’re walking around the city. And I think it needs to be seamless, knowing and understanding each of those contexts around you.

And I think we’re close now, especially with Gemini 3. I feel we’ve finally got AI that is maybe powerful enough to make that a reality. It’s one of the most exciting projects we’re working on, I would say, and one of the things I’m personally working on — making smart glasses really work. We’ve done some great partnerships with Warby Parker and Gentle Monster and Samsung to build these next-generation glasses, and you should start seeing that maybe by the summer.

Alex: Warby Parker did have a filing that said that these glasses are coming out pretty soon…

Demis: And the prototype design — it depends how quickly that advances — but I think it’s going to happen very soon. And I think it will be a new, category-defining technology.

Alex: Given your personal involvement, is it safe to say that this is a pretty important initiative?

Demis: I like spending my own time on important things, and I like to be at the most cutting-edge thing. That’s often the hardest thing — picking interim goals, giving confidence to the team, and also just understanding if the timing is right.

And over the years I’ve been doing this, many decades now, I’ve gotten quite good at doing that. So I try to be at the most cutting-edge parts. I feel I can make the most difference there. So things like glasses, robotics — I’m spending time on, and world models.


The Future Approach to Advertising 

Alex: There’s been some news that Gemini might include ads. There’s been some news that some of your competitors might include ads. The funniest thing I saw about that on social media was someone who said, these people are nowhere close to AGI if the business model is advertising.

Demis: Well, it’s interesting. I think those are tells. Actions speak louder than words. Going back to the conversation we were having earlier — with Sam and others claiming AGI is around the corner — why would you bother with ads then? So that is, I think, a reasonable question to ask.

From our point of view, we have no plans at the moment to do ads, if you’re talking about the Gemini app specifically. We are obviously going to watch very carefully the outcome of what ChatGPT says it’s going to do. I think it has to be handled very carefully.

Because the dichotomy I see is that if you want an assistant that works for you, what is the most important thing? Trust. So trust and security and privacy, because you want to potentially share your life with that assistant. Then you want to be confident that it’s working on your behalf and with your best interests. And so you’ve got to be careful that the advertising model doesn’t bleed into that and confuse the user as to what this assistant is recommending. And I think that’s going to be an interesting challenge in that space.

Alex: That’s what not to do. And Google CEO Sundar Pichai, in a recent earnings call, said there are some ideas within Google of the right way to approach this. How do you approach advertising?

Demis: We’re still brainstorming that. But I think there are also very interesting ways when, if you think about glasses and devices, there are other revenue models out there. So it’s going to be interesting to see. I don’t think we’ve made any strong conclusions on that, but it’s an area that needs very careful thought.

Alex: I read before we met that Google told advertisers it plans to bring ads to Gemini in 2026. No?

Demis: We have no current plans. That’s all I can say.

Google vs Its Competitors 

Alex: Let’s keep going through some of your competitors. Anthropic’s Claude Code and Claude Cowork have caused a tremendous amount of buzz. What do you think about them? And do you plan to have an answer?

Demis: It’s very exciting. And I think kudos to Anthropic. I think they built a very good model there with Claude Code.

We’re very happy with the current coding capabilities of Gemini 3. It’s very good at certain things, like front-end work. I’ve been using it over Christmas to prototype games, so it’s amazing — it’s getting me back into programming. I love the whole vibe coding wave that’s happening. I think it will open up the whole productivity space to designers, creatives and artists who maybe would have had to work with teams of programmers, or have access to them. Now they can probably do a lot more just on their own. I think that’s going to be amazing once that’s out in the world in a more general way — it will create lots of new creative opportunities.

We’re very happy with our work on code. We’ve got more to do there. We’ve just released Antigravity, our own IDE, which is very, very popular — we can’t actually serve all the demand that we’re seeing there. And we’re pushing very hard on the coding and tool-use performance of Gemini. But it’s the one thing that I think Anthropic have fully focused on. They don’t make image models, multimodal models, world models; they just do coding and language models, and they’re very, very good at that. We’re pleased to be partnering with them on the one hand, and on the other, it gives us something to push against to improve our own models.

Alex: Let’s talk broadly about the AI business. I have a theory for how this could all fall apart, and I want to run it by you. It’s a three-step process. The first is that large language model training runs produce limited returns. The second is that there are flash models, like Gemini Flash, that run AI computing as cheaply as search. And step three is that the massive infrastructure commitments become useless given those two factors, and a cascading collapse happens. Is that a legitimate worry?

Demis: I think it’s a plausible, possible scenario. I don’t think it’s the likely one, in my opinion.

In my mind, there’s no doubt AI has already proved out enough — through our work in things like science, AlphaFold and drug discovery — that it’s here to stay. It’s not like tomorrow it’ll be, “Oh, we found that AI doesn’t work.” We’ve blasted way past that. So I think it’s clearly going to be the most transformative technology in human history. There’s maybe a question mark about timelines — is it two years or five years? — but either way, it’s very soon for something this transformative.

And I think we’re still in the nascent era of actually figuring out how to make use of it and deploy it, because the technology is improving so fast. I think there’s a huge capability overhang, actually — even today’s models can do things that maybe even those of us building them don’t fully know about. So I think there’s just a vast amount of product opportunity that we see.

And I think, as Google, we’re only just starting to scratch the surface of natively plugging these things into our amazing existing products, let alone building new ones. AI inbox, we’ve just started trialing — I mean, who wants to do email admin? Wouldn’t we all love that to just go away? That’s my number one pain point of my working day. And there are so many examples like that just waiting to be addressed: agents in browsers, helping out with YouTube. Obviously, we’re now powering search with it. So I think there are enormous opportunities.

And if you’re talking about the AI bubble — if that’s the question, I’m very happy to answer it — look, in my view it’s not binary, are we in a bubble or not in a bubble. I think parts of the AI industry probably are, and for other parts, it remains to be seen.

So some of the things — when you see seed rounds of tens of billions of dollars for companies that basically have no product or research, just some people coming together — that seems a bit unsustainable to me in a normal market, a bit frothy. On the other hand, businesses like ours have massive underlying businesses and products where it’s very obvious how AI would increase the efficiency or the productivity of using those products. And then it remains to be seen how the monetization of these new AI-native products — chatbots, glasses, all of these things — plays out. I think there will be enormous markets, but they’re yet to be proven out.

But from my perspective, running Google DeepMind, my job is to make sure that whatever happens — if a bubble bursts or if there isn’t one and it continues — we win either way. And I think we’re incredibly well positioned as Alphabet in either case — doubling down on existing businesses in the one case, or being at the forefront in the bull case.

Alex: Going back to The Thinking Game, I started to feel bad for the opponents of your technology. Lee Sedol, demoralized. MaNa, who played StarCraft, beat your bot but realized that basically it’s over for humans versus machines. Now we’re all up against this in some way as this stuff makes its way into knowledge work.

Demis: I thought you meant our AI competitors. Them I’m okay with. I don’t feel sad about that.

Alex: Yeah, you made me feel bad for the gamers. And the same models that performed admirably against the world’s best StarCraft and Go players are now starting to do our work. Are we going to end up in the same position?

Demis: Well, look — given you brought up games as an example, let’s look at what’s happened in games. So chess: since the ’90s, when I was a teenager, we’ve had chess computers that were better than Garry Kasparov — Deep Blue. They weren’t general AI systems. And chess is more popular than ever. No one’s interested in seeing computers playing computers. We’re interested in Magnus Carlsen playing the other top chess players in the world.

Interestingly, in Go, the best Go player in the world is South Korean, and he was about 15, I think, when the AlphaGo match happened. He’s in his mid-20s now, and he’s by far the strongest player there’s ever been by Elo rating, because he learned natively — he’s the first generation, you could say, that learned with AlphaGo knowledge in the knowledge pool. And he may actually be stronger than AlphaGo was back then.

And we all still enjoy StarCraft and all the other computer games. We enjoy human endeavor. I think it’s a bit similar to, like, we still love the 100-meter Olympic race, even though we have vehicles that can go way faster than Usain Bolt. But that’s a different thing. And so I think we have infinite capacity to adapt and evolve with our technologies.

Because why is that? Because we are general intelligences. That’s the thing about it — we are AGI systems. Obviously, we’re not artificial; we’re general systems. And we are capable of inventing science, and we’re tool-making animals — that’s what separates us humans from the other animals. All of modern civilization, including computers — and AI being the ultimate expression of computers — has come from our human minds, which evolved for a hunter-gatherer lifestyle. So it’s kind of amazing, and it shows how general we are, that we were able to get to the modern civilization we see around us today, where we’re talking about things like AI and science and physics. And I think we’ll adapt again.

But there is an important question, actually, beyond the economics one about jobs and those things — it’s purpose and meaning. Because we all get a lot of our purpose and meaning from the jobs we do. I certainly do from the science I do. So what happens when a lot of that is automated? I think that’s why I’ve been calling for — I think we need new great philosophers, actually. And it will be a change to the human condition. But I don’t think it necessarily has to be worse. I think it’s like the Industrial Revolution, maybe 10x of that, but we’ll have to adapt again. And I think we’ll find new meaning in things, and we do a lot of things already today that are not just for economic gain — art, extreme sports, polar exploration, many of these things.

Information as the Answer to the Universe

Alex: In a recent interview, you said that you have a theory that information is the most fundamental unit of the universe, not energy, not matter — information. How is that possible?

Demis: Well, look — I think a lot of people sort of think of energy and matter as isomorphic with information, but I think information is really the right way to understand the universe.

So if you think of biology and living systems, we’re information systems that are resisting entropy. We’re trying to retain our structure — retain our information — in the face of the randomness happening around us. And I think you can look at that at a larger physical scale too, not just biology: things like mountains and planets and asteroids. They’ve all been subject to some kind of selection pressure — not Darwinian evolution, but some kind of external pressure — and the fact that they’ve been stable over a long amount of time means that their information is stable and meaningful. So I think one could view the world in terms of its complexity — its information complexity.

And a lot of what we’re doing — the reason I’m thinking about all of that — is because of things like AlphaGo and AlphaFold, especially AlphaFold, where we solved all the protein structures that are known to science. And how did we do that? Well, because only a certain number of the almost infinite possibilities of protein structures are stable, and those are the ones you’ve got to find. So you’ve got to understand that topology — that information topology — and follow it. And then suddenly these problems that seem intractable — because how can you find the needle in the haystack? — actually become very tractable if you understand the energy landscape, or the information landscape, around them.
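A toy way to see why following a landscape beats blind search: simulated annealing on an invented one-dimensional energy function. AlphaFold does not work this way (it uses a learned neural network), but the sketch shows how exploiting a landscape’s structure turns a needle-in-a-haystack search into a tractable descent.

```python
import math
import random

def energy(x: float) -> float:
    # Made-up rugged landscape: a broad basin plus many shallow bumps,
    # where the rare deep minima stand in for stable structures.
    return 0.1 * x * x + math.sin(5 * x)

def anneal(steps: int = 20000) -> float:
    """Simulated annealing: follow the landscape downhill, with enough
    thermal noise to hop out of shallow local minima. Enumerating all
    states would be intractable; using the landscape's structure is cheap."""
    x = random.uniform(-10, 10)
    for step in range(steps):
        temperature = max(1e-3, 1.0 - step / steps)
        candidate = x + random.gauss(0, 0.5)
        delta = energy(candidate) - energy(x)
        # Always accept downhill moves; accept uphill moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
    return x

best = anneal()
print(f"stable state near x={best:.2f}, energy={energy(best):.2f}")
```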

And that’s how I think eventually we’ll solve most diseases, come up with new drugs, new materials, new superconductors, with the help of AI helping us navigate that information landscape.

Alex: Speaking of health and AI, there’s this moment in The Thinking Game where there’s a discussion in the lab about whether to release the results of AlphaFold. And you sit there adamantly, and you’re like, why are we going through a process? Release it. Release it now. Talk a little bit about the lesson from that.

Demis: We started AlphaFold to crack an unbelievably tough scientific challenge — the 50-year grand challenge of protein folding and protein structure prediction. And the reason we worked on it, and put so much effort into it, is that we thought it was a root-node problem: if we could solve it and put that out in the world, it could have untold impact on things like human health and our understanding of biology.

But we as a team, no matter how talented or hardworking, would only ever scratch a tiny amount of that potential on our own. That was clear. So in that case, the right thing to do to maximize the benefit to the world was obvious: put AlphaFold out there for the massive scientific community to use and build on top of.

And it’s been incredibly gratifying to see three million researchers around the world use it in their important research. I think in the future, almost every single drug that’s discovered from now on will probably have used AlphaFold at some point in that process, which is amazing for us. And really, this is what we do all the work for.

Alex: I also read that moment — you tell me if I’m wrong — as something of a metaphor. Small, passionate AI division kind of yelling in a big company, “Get this out. Cut the red tape.”

Demis: Yeah, potentially. But look, we’ve had amazing support from the beginning from Google. And the reason that we joined forces with Google back in 2014 is Google itself is a scientific, research, engineering, technical-led company — always has been — and has that at its core. And that’s why I think we have the scientific method and the scientific approach, that thoughtful approach, that rigorous approach, in everything we do. So of course, they’re going to love something like AlphaFold.


When Computers Surpass Humans 

Alex: Here’s the big question at the end. You built AlphaGo, trained the computer to play Go on human knowledge, and then, once it mastered human-level play, you let it loose with a program called AlphaZero, and it started doing things that you could never even imagine, making new circuits in ways that surprised you.

Eventually, maybe there will come a time where LLMs, or some version of them, reach a mastery of human knowledge in the same way. What is going to happen when you then let that loose, and it potentially does the same thing as AlphaZero?

Demis: That would be the AGI moment. Then it will discover a new room-temperature superconductor that’s possible within the laws of physics — we just haven’t found that needle in the haystack. Or a new source of energy, a new way to build optimal batteries.

I think all of those things will become possible, and indeed, not just possible — I think they will happen once we get to a system that’s first of all got to human-level knowledge. And then there’ll be some techniques — maybe it will have to help invent some of those techniques — but kind of like AlphaZero, that will allow it to go beyond into new, uncharted territory.

Alex: That idea of it plugging weather satellites into its brain — it’s going to be on that level?

Demis: Exactly. Exciting times.

Alex: All right, Demis. Thanks for coming on the show.

Demis: Thank you. 

About the Author
Alex Kantrowitz

Alex Kantrowitz is a writer, author, journalist and on-air contributor for MSNBC. He has written for a number of publications, including The New Yorker, The New York Times, CMSWire and Wired, where he covers the likes of Amazon, Apple, Facebook, Google and Microsoft. Kantrowitz is the author of "Always Day One: How the Tech Titans Plan to Stay on Top Forever" and the founder of Big Technology. He began his career as a staff writer for BuzzFeed News and later worked as a senior technology reporter for BuzzFeed. Kantrowitz is a graduate of Cornell University, where he earned a Bachelor of Science in Industrial and Labor Relations. He currently resides in San Francisco, California.

Main image: Jennifer 8. Lee | Wikimedia Commons