A.I. & Education:
An Interview with Li Jiang of Stanford University

August 12, 2019

Interview by Mike Whipple




Mike: I was hoping to talk a bit about interdisciplinary learning at the high school level, your AIRE Labs, and the subject of robotics. Starting a bit more specifically: if students in high school had a robotics unit, what are some key topics you’d recommend implementing? Sort of like how schools might offer anticipatory pre-calculus…


Dr. Li Jiang: The advancement of AI and robotics will change the way that people live. People need a basic understanding of how AI and robotics work. This is not just for students interested in engineering and robotics; it is for everyone. So, we are pushing a concept called “AI Thinking”: everyone should have a basic understanding of how AI works, and how robots with AI work. This is for elementary school students, middle school students, and high school students. And, we don’t need to wait until high school to start teaching about AI and robotics. Frankly, we have found that it’s easier to teach elementary school and middle school students, because high school students are usually very busy. Whether you’re talking about students in the U.S.A., in Canada, in Europe, or in Asia, they are usually very busy because they have this pressure to get into a good college. So, they need to do a lot of things in order to prepare, and they have very little time. Even people who are very interested in robotics might only focus on it when they recognize that a big achievement could help them get into a good college - then, they will put a lot of time into it. But, these people represent a very small portion of the whole population.


What we’re talking about is a general education for almost everyone on AI and robotics. Here’s another thing: when people talk about AI and robotics, they very naturally start to think about mathematics and very engineering-focused topics, and assume you need to know a lot of things - like linear algebra - no matter what you do. A lot of these AI algorithms are done with linear algebra, which is matrix calculation. And then, people might ask, “How can you teach these to elementary school students or middle school students?”, and I’m like, “No, we don’t teach the mathematics. Every mathematical equation has a meaning associated with it. If you really understand the equation, then you understand its meaning. You don’t talk about the equation with elementary school students; you talk about the meaning behind it. We have basically shown that if you only talk about the meaning, they all can get it.” And actually, we think this AI Thinking concept is for everyone, not just engineering-focused students, or computer science, electrical engineering, or mechanical engineering majors. Because, in the future, AI will get into your life, and everything will get associated with it, so you had better have some idea.


It’s sort of like common knowledge for the future, except it’s starting now - common knowledge for us to have a better life in this world. It’s kind of like how, over the past fifty years, you needed to have common knowledge of electricity: if you don’t know electricity, you get into trouble. From now into the future, we are in a society of intelligence, and you need to have common knowledge of AI so that you can have a better life.


When we talk about AI Thinking, there are three points. First, you need to understand how AI works at a conceptual level, not a mathematical level. Second, you need the ability to differentiate yourself from these machines. There are things that machines - or AI - are good at, and there are things that humans are good at. So, you need to know where the boundary is, so that you can focus on the things that you are good at. Third, you need the ability to work with robots and AI.


Would you say this links into the concept of the Strong AI vs Weak AI, and avoiding isolating yourself within professions that Weak AI could handle, leaning more towards the “Strong Side”?


That’s right. See, here’s the thing: we really don’t know how to do Strong AI, though it depends on how you define it. The scientific definition of Strong AI means that it can do everything a human can do. And now, some people claim they are doing research on Strong AI just because their AI can do a few tasks - not just one, but two. But, two is still Weak AI. You cannot claim you are doing Strong AI because your system can play two games now instead of just one. Humans are really good at transferring knowledge from one area to another. Basically, Andrew Ng says something like, “Things that humans need to think about for one to two seconds can be done by AI. If humans need to think for longer than that, normally AI cannot do it. But if a task is a combination of several tasks that each require one second of thinking, then we can make robots with AI to do it.”


[Regarding the third point: working with AI] You need to use it as a strong tool, and not try to avoid it. A typical example: people who study arts might say, “I don’t really need to know AI, since I’m the one who is really creative, and art is all about creativity, and AI is not gonna beat me on that.” But, AI also has creativity. The thing is, if you understand AI Thinking, you’ll see that AI creativity is very different from human creativity - there are two types. If you define creativity as being able to make something new, AI can also make something new, but it needs to be under the guidance and instruction of humans. For example, it could produce a Chinese painting to the level that no one could tell the difference. In that sense, it can draw something. The problem is, if you ask the AI what it drew, it has no idea. It is basically one algorithm competing with another algorithm, trying to beat a judging algorithm. One algorithm is trying to draw as realistically as possible, and the other is trying to tell whether a drawing is human-made or machine-made. They fight with each other many times, and both get much better. I’ve been talking about this since about three or four years ago. I’m an engineering-trained person with no painting training, so even if I have a very nice picture in my head, I have no way to draw it, because I never developed that motor skill. I’m not a painter. But in the future, with the help of AI, I could interact with it to draw very nice things. For example, I can tell the AI, “draw a bunch of trees on the left, maybe some hills, a river on the right, and then some clouds in the sky.” You describe it, and the AI can come up with it and help you draw. I could sketch something on a sketch board, and then the AI could help me make it more formal.
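[Editor's note] The back-and-forth between the drawing algorithm and the judging algorithm that Dr. Jiang describes can be sketched in toy form. This is a heavily simplified, one-dimensional stand-in for that adversarial setup, not a real GAN; the numbers and update rules are illustrative assumptions only:

```python
import random

random.seed(0)  # make this toy run reproducible

# The "generator" learns a single number; the "discriminator" learns a
# threshold separating real samples from generated ones.
REAL_MEAN = 10.0       # the "real data" the generator tries to imitate
gen_value = 0.0        # the generator's current output
disc_threshold = 5.0   # the discriminator calls anything below this "fake"

for step in range(2000):
    real = REAL_MEAN + random.gauss(0.0, 1.0)
    fake = gen_value + random.gauss(0.0, 1.0)
    # Discriminator update: slide the threshold toward the midpoint
    # between the real and fake samples it just saw.
    disc_threshold += 0.05 * ((real + fake) / 2 - disc_threshold)
    # Generator update: nudge the output toward whatever fools the judge.
    if fake < disc_threshold:   # the discriminator caught the fake
        gen_value += 0.05
    else:                       # the fake passed as real
        gen_value -= 0.05

print(round(gen_value, 1))  # drifts close to REAL_MEAN
```

The point of the sketch is the dynamic Dr. Jiang describes: each side's update pressures the other, and after many rounds the generator's output becomes hard to distinguish from the real data.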


Just about two years ago, a friend sent me a picture and said, “look at this bird,” and I said, “yeah, that looks like a photo, and it’s a very nice bird,” and my friend said, “well, this is not real; this is generated by my AI, and here’s the script that I inputted into my system.” So basically, you give the AI a bunch of words - a bunch of sentences - to describe something, and it will draw it for you. You interact with the AI, and the AI can help you. The AI becomes your hand, and dramatically lowers the barrier to becoming a painter. So then, we’ll have more painters. So, you had better know how to use AI. Sure, we’ll have people who are very good with their own hands, but the vast majority of people will draw with AI.


Do you think it would be worthwhile to implement in arts programs in elementary school or whichever, for students who have difficulty with the necessary motor skills?


I don’t know too much about painting training. So, when you mention kids who have some problems with motor skills, I would assume it’s still better for them to do some motor-skills training. But, on the other side, they need to know that they have this tool. So it’s good to introduce these tools - or this concept - to them, so that they know something’s coming and there’s something new they could use, instead of having no idea about it.


I’m talking more about a very general education for everyone on AI and robotics rather than teaching a very small group of people who are in the robotics ‘club’.


During your Stanford LASER talk, you compared artificial intelligence with human intelligence (diagram shown below). What do the empty spaces mean, on the AI diagram? Would they just be gaps in understanding?



I was using the circle to represent the area of intelligence. Weak AI is only good at one or a few things: it drills these small holes, and drills them deep. At specific tasks, machines can do really well. Human intelligence doesn’t function like that; it covers the whole area. General-purpose AI - Strong AI - would cover the whole surface, so the whole thing “goes down.” Humans can go from one area to another pretty smoothly.


So, to answer the question: some people will say, “Yeah, we’re doing Weak AI; however, if we do one Weak AI, two Weak AI, three Weak AI… we can do infinite Weak AI. If we sum them together, that’s a Strong AI.” That’s an idealized argument - an infinite number of dots could be put together - but my reaction is, “No, that’s not it.” Of course, a surface contains an infinite number of dots. But putting in an infinite number of dots does not form a circle or a surface. So, my argument is that even if you have an infinite number of tasks and you put these Weak AIs together, that’s still not Strong AI. It does not cover the whole surface.
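[Editor's note] One way to make this geometric intuition precise: a countable collection of isolated points (the "drill holes") has two-dimensional Lebesgue measure zero, by countable subadditivity,

\[
\lambda^{2}\!\left(\bigcup_{i=1}^{\infty}\{p_i\}\right)\;\le\;\sum_{i=1}^{\infty}\lambda^{2}\!\left(\{p_i\}\right)\;=\;\sum_{i=1}^{\infty} 0 \;=\; 0,
\]

while the disc has positive area, so no countable list of point-like skills can cover it. This formalization is the editor's gloss, not Dr. Jiang's wording.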


In the wake of some of the comments from Elon Musk et al., I think some people have a general insecurity about being replaced. As a general point of common knowledge, would you suggest not worrying about being replaced by AI? So, AI can’t really replace human intelligence?


(22:00) That’s right. I don’t think AI can completely replace humans; it will push humans to do higher-level things. Our education system has trained people to have the capability to do repetitive mental work, starting basically from the revolution of electricity. Electricity got into our society, then we had companies with machines, and we needed people to operate those machines. These were intelligent jobs compared with what came before: people needed to know, “If this situation happens, what do I do?” and “If that situation happens, what do I do?” Now, these are becoming the robots’ jobs; they can be done by programming AI algorithms or smart algorithms. Previously, there was no way - earlier on, the sensors weren’t good enough, because recognizing a situation requires sensing technology to digitize that information. For example, fifty or seventy years ago, we relied on human sensors: you’d use your eyes to look at a situation, and you’d say, “Okay, that’s *this* situation. Situation One: I do A, B, and C.” But now, we have advanced sensing devices and algorithms, so we can detect “Situation One” and then make the machine do A, B, and C automatically. In the past, because we didn’t have these capabilities, we got good at producing people who did really repetitive mental and physical work.
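[Editor's note] The “Situation One: do A, B, and C” pattern Dr. Jiang describes is a simple sense-classify-act loop. The sensor, threshold, and action names below are made-up illustrations, not examples from the interview:

```python
# Map each predefined situation to its fixed action sequence.
ACTIONS = {
    "overheat": ["open valve", "slow motor", "raise alarm"],  # Situation One: A, B, C
    "normal":   ["log reading"],
}

def classify(temperature_c: float) -> str:
    """Digitized sensing: turn a raw reading into a named situation."""
    return "overheat" if temperature_c > 90 else "normal"

def control_step(temperature_c: float) -> list[str]:
    """One loop iteration: sense -> classify -> act."""
    return ACTIONS[classify(temperature_c)]

print(control_step(95))  # ['open valve', 'slow motor', 'raise alarm']
print(control_step(40))  # ['log reading']
```

Once the reading is digitized, the human "sensor and rulebook" role collapses into a lookup - which is exactly why this class of job is automatable.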


Now, it’s a different story, because we have robots with AI. And these machines - these robots - are really good at doing repetitive physical and mental work. This doesn’t leave much space for humans, because machines don’t sleep: they work 24/7 and don’t really make mistakes, as long as everything is functioning correctly. Humans, even when everything is good, still make mistakes - that’s human error. In that sense, AI and robots will push people upwards, into jobs that require more intelligence: for example, jobs that need human creativity, or jobs that involve constant change, where you need some kind of skill to handle that change. I don’t believe that AI is going to be the Terminator or something; it’s just going to make our lives better. However, during this process, some people will suffer. Every time we have a new tool, some people get replaced. Normally, if these people are not prepared, they will not be the people who take the new jobs created by these tools.


So, that’s why I like to use the example of a computer.


(Referring to Dr. Jiang’s prior LASER talk) from the movie Hidden Figures?


I actually didn’t know about that movie until a colleague of mine at Stanford - she also teaches robotics - and I were talking about it over lunch, and she told me, “Well, you should see that movie. That movie is exactly what you’re talking about.” Before, “computer” used to refer to a human; now people use it for a machine, and almost nobody knows that it used to refer to a person. When I saw the movie, I thought, “Wow, this is exactly that transition period, and it’s so good that there’s a movie about it.” The good thing is that Dorothy, in that movie, noticed it and made plans to catch up. And they succeeded. In that movie, a pilot - I don’t remember the name - wouldn’t launch until confirming with a human calculation. But, that will not happen again. Let’s say, from now on, a pilot needs to go to the moon again, and some calculations need to be made to ensure the trajectory is right. It’s hard to imagine someone saying, “I have to confirm with a human calculation, and then I’ll go.” That kind of situation will become more rare. At that time, they - or, some people - didn’t really trust [machine] computers that much, but now it’s different. Now, we need to have students and kids be prepared for the future. Some groups of people will be replaced and their lives will be upended, and then the later ones will catch the new jobs. I worry about those groups of people. If we teach them in the right way, they could catch the new jobs created by AI.


 Regarding the job market, would your general recommendation be to avoid hyper-specialization?


It’s hard to summarize things like that. For example, the common knowledge of electricity is pretty simple. But if we say that the common knowledge of electricity is one-dimensional, the common knowledge of AI Thinking has maybe three, four, or five dimensions - it can be implemented in a lot of different ways in different areas. So, I would say that if your work is repetitive mental work, then you need to rethink the whole thing.


Sometimes I’ll have a chat with a friend and say, “Hey, what are you doing these days?” and he’s like, “I’m doing this and that,” and I say, “Why are you doing *this* as your job?” and he says, “Well, it’s kinda simple. While I’m doing my job I can do other things.” This is not a good way of thinking: if you’re doing simple things, it means the job is very vulnerable. For highly specialized jobs, I shouldn’t say they’re easy to replace, because some very specialized tasks require capabilities from all kinds of areas, and a lot of human interaction. It’s more the repetitive work, or work that doesn’t require a lot of thinking, that will be replaced. If it is very specialized work that needs a lot of thinking, then you will still be very safe - especially in jobs that deal with humans a lot.


Even when it comes to jobs that involve dealing with humans a lot, AI can be used to some extent, maybe. For example, we might imagine using AI in a classroom to perform certain tasks. Would it still be just in a supplementary sense, and not really a threat to those positions as a whole?


When I talk with K-12 teachers, I can see that they’re sort of divided into two groups. There are people who are really worried that AI will take their jobs, and then there are people who are embracing AI, saying, “Welcome, come,” and that we need to get to know more about it. So, here’s the thing: teaching is a very complex job, with complex tasks. Let’s say you have a teacher with twenty-four or twenty-five elementary school students. Each student is different. And, as a father of two, with both kids in school, I know the most important stuff kids learn in school isn’t pure knowledge. From a pure knowledge-transfer point of view, we could use robots or AI, or robots with AI, to do most of the job. For example, a lot of schools just play videos from Khan Academy; they have lots of videos and explain concepts pretty well. Some teachers will go, “Well, I’ll just play this and you guys can watch, and then I can answer questions.” That’s already a signal that something is replaceable, because the videos can do it better. But, in my point of view, good teachers will not get replaced, because they are teaching these kids how to become a human, how to think, how to address a problem, and a ton of other stuff. So, if we only think about school as a place to transfer pure knowledge, then yeah, AI can probably do ninety percent of the job, or even one hundred percent. But, I feel a teacher does way more than that. If I asked you which teacher had the most influence on your life, and you gave me some examples or events, most likely it would not be an example like, “Hey, one day he taught me this….” Usually it’s not. The way a teacher teaches you, or a day when you were sad or encountered difficulties and a teacher helped you - that stays in your memory.


It is important for these kids to get a way to know about more advanced research. I talk with a lot of professors - top scientists in their ‘60s or ‘70s - and ask them about their education experience, to tell me stories from back when they were young, back in K-12. I’ll ask them what impacted their life. From an academic point of view, it’s normally specific events. For example, they met some famous professor, or they got into a lab at a university and saw something. Basically, they met someone at an event whom they normally wouldn’t meet, talked with them, and that person threw some ideas at them; or, they saw something in a research lab that really shocked them, and that totally changed their life. One professor told me that through some random opportunity he got into a research lab and saw some really cool stuff in fifth grade, and that changed his life. He’d never really thought that science could be that interesting, and the rest of his life just followed that route.


Can you tell me a little bit about AIRE Labs?


So, this is actually what AIRE is doing. Think about the knowledge we teach in K-12: most of it is really old knowledge from human history - several-hundred-year-old knowledge. Even in undergrad. For example: calculus. Calculus is treated as somewhat advanced mathematics; some high schools teach it in AP classes, and then it’s in undergrad classes for engineering students. But it’s already three hundred years old. The new stuff happening in labs, within the last twenty years - normally you can only learn about it in graduate school. If you think about it, it’s not a very healthy way of teaching: students in K-12 are only taught very old knowledge, and this old knowledge is usually not very inspiring to them. But when they get a chance to see new knowledge in research labs, they get inspired, and it can change and transform kids. So, that’s why we feel it’s very necessary to have a channel for these top experts to somehow teach K-12 kids.


AIRE is a program that consists of professors from the engineering school and the school of education, and we share the same interest, which is that we feel the current education system lags behind our technological advancement. A lot of us have kids at home; we deal with this education system on a daily basis, and we suffer from that - especially in Silicon Valley. If you talk with any entrepreneurs in the field of AI and robotics, their fundraising pitch will usually begin with, “If you use this technology, you will replace this many people and save this much money.” So, if you talk with these entrepreneurs, you see where the trends in AI are going. But, on the other side, your kids are in school, and these schools change really, really, really slowly, and the way they’re taught isn’t the way that you feel is proper. The kids don’t get the chance to see this more advanced stuff, and they get bored - but they see this AI stuff in the movies.


Here’s the funny thing: two years ago, I invited a roboticist from Hollywood to come talk in my class. I met this guy at a robotics conference, where he gave a keynote speech because he had designed a lot of really cool robots in movies and really impressed a lot of these researchers. It’s a pity that these guys don’t publish papers - they make cool robots. So I invited him to talk with my graduate-level class, and his talk was really good. But then, I asked him to talk about education, and he said something very interesting. He told the class, “Okay, Li asked me to talk about education. I’m a roboticist in Hollywood. 99% of the AI and robotics education of the general public is done by us, the movie guys. What we care about most is the box office. So, we care to some extent about the knowledge, but, frankly speaking, we care about the box office.” So, in that sense, most people don’t get the right message about robots and AI. They get things from the movies. If we talk with people and they come up with examples, most examples come from the movies, and what’s shown in the movies isn’t really accurate. Most people get educated by the movies, which is kinda sad.


Would you say that the A.I. in movies is an exaggerated Strong AI?


Mostly, it is; otherwise people won’t watch it. They have to make it interesting. We push the concept of AI Thinking to help people understand what is doable in AI and what is not, and what things humans can do better than AI. In a lot of movies, AI is way smarter than humans. It knows everything, which makes people lose faith, to the extent of asking, “What’s the point of having humans when AI knows everything?” AI doesn’t know everything, and probably never will. But on the other side, it could be very helpful. Let’s say we have six or seven billion people on the Earth. Humans will leave Earth for sure, and we will become an interplanetary species. But, we need to do it. It’s kinda like Columbus and the brave people sailing to another continent; now, we need to sail to another planet. For humans, as biological organisms, the conditions we need to live are very strict: from, like, minus-10 degrees Celsius to maybe 30-35 - so, roughly 14 degrees Fahrenheit to 95. If you send humans first, you drastically make the task more difficult. Sending robots is the much better solution. In fact, we do that all the time - for example, Curiosity on Mars. That’s a robot, and it’s still working on Mars. We need to do that. We need the help of a lot of robots with AI to do these jobs. And, I feel that as a society becomes richer and richer, the human population isn’t going to go straight up; it’s gonna plateau at some point. We’ll have a limited number of humans, but you can make a whole lot of robots to help us do a lot of jobs. I feel like that’s the future. The robots aren’t really coming here to become the Terminator - theirs is the role of helper. That’s my view.



When we talk about AI in education, there are a lot of researchers using current AI technology to enhance the teaching experience or the learning experience - inside the classroom, at home - to make kids learn current knowledge faster, better, and easier. I think that’s very meaningful. We are taking a different approach: we feel that we need to change the content that kids are learning. At the current stage, we’re teaching this several-hundred-year-old knowledge, and using AI to help them learn that knowledge faster is one way. The second way we are thinking about is getting the newest knowledge into K-12, so that kids know what’s going on and can choose their major and future route - we don’t need to wait until graduate school for them to be exposed to this new knowledge. Especially the knowledge about AI and robots that could change their lives dramatically. In the history of science, there are technology advancements that only change a small part of human society. We call that “innovation,” but the scale is small. For example, you have a new material that helps you make a new car tire that lasts, maybe, thirty years. A car will not last for thirty years, so this is kinda like a permanent tire. That’s cool, and it will impact the car industry, but maybe that’s it. However, AI is a platform-level innovation that’s going to change everything - like when electricity was introduced to society: it changed everything. This is probably the same level, and it’s unavoidable; it will impact everybody’s life, so we have to know about it. For this level of innovation, everyone gets influenced, so it’s better to prepare kids for it.


You think of it as a massive paradigm shift, in that we need to make sure everyone keeps up?


It will change everyone’s lifestyle, and jobs. For example, a friend of mine who lives in Asia was telling me that he is a lawyer, and he has already noticed a trend of the big accounting firms shifting from accounting toward the legal business. Think about accounting: they deal with numbers on computers, and a lot of those jobs can be done by algorithms. He said, “That trend is so clear, and accounting firms are trying to grab business from these law firms already.” They feel the lawyers are safer, in that they deal with people more, and every case is different - it’s safer when facing AI. If you talk with people, you see all these different indicators showing it’s happening in different sectors and industries.


You mentioned that you’re currently in China; are you noticing other examples in various industries in the United States where this is happening? Where companies are latching onto these trends?


Everywhere. I think it happens a lot in the financial sector. I’ve read that Citibank released some articles saying that by 2025, 30% of the jobs in the financial industry will be replaced, or something like that - that’s like 1.3 million people. It’s not really surprising, because in the financial industry, you’re dealing with numbers in a computer system. It’s kinda like the accounting stuff: dealing with numbers in a way that can be predefined, and doing it very frequently and repetitively. These jobs can be replaced. It happens everywhere. For example, some magazines - or online media outlets - use AI algorithms to write articles, especially in the financial and sports sectors. They started with these two sectors because, as long as you don’t mess up the numbers - so, a soccer game is 2-1 and not 1-2 - then you’re fine. And most people will not realize that these articles are not written by humans. They used to need ten writers, but now they only need two, basically to do the final adjustments. The AI can produce an article in a second, and then a human proofreads it, makes changes, does the final editing, and sends it out.
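[Editor's note] The sports-recap case can be illustrated with a toy template-filling sketch. The team names and phrasing below are made up; the point is Dr. Jiang's: the system's main job is keeping the numbers straight.

```python
def match_report(home: str, away: str, home_goals: int, away_goals: int) -> str:
    """Generate a one-line match headline from the final score."""
    if home_goals > away_goals:
        return f"{home} beat {away} {home_goals}-{away_goals}"
    elif home_goals < away_goals:
        return f"{away} win {away_goals}-{home_goals} away at {home}"
    else:
        return f"{home} and {away} draw {home_goals}-{away_goals}"

print(match_report("Lions", "Bears", 2, 1))  # Lions beat Bears 2-1
print(match_report("Lions", "Bears", 1, 2))  # Bears win 2-1 away at Lions
```

Real systems of this kind are far richer, but they share this structure: predefined phrasing plus faithfully transcribed numbers, with a human doing the final edit.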


So, that’s happening. A lot of these big companies need to generate a lot of content.


Some people have the insecurity that AI might be encroaching “too much, too soon.” But, conversely, some people worry that when we consider people’s needs - for example, the needs of the elderly - AI isn’t advancing as quickly as it should. As our aging population swells, will AI be able to assist?


It depends on what you’re talking about. AI already has a big impact on the medical industry with imaging. It has been proven that AI is better than humans at scanning through medical images to look for things like signs of cancer - it can do it much faster than a human. On the other side, if you’re talking about the elderly, some people have disabilities and can’t really move by themselves, and we can provide a robot to help them move. I feel that technology is one thing, but there are other things, like FDA approval and going through a lot of regulations. When we get into medicine, with physical contact with the patient, it takes much longer by nature, no matter what kind of technology it is. I do see a huge potential there, but I still think it will move slowly because of all these regulations, and people need to think about whether it makes sense financially. Things move way faster when people see that advancing the technology can generate money - that there’s a big market for it. So, if you only rely on universities to do that, it’s going to happen slowly. Universities usually deal with the more visionary things, but when you need to see the impact in society, it’s the companies doing that, because it’s the companies making the product. And, you need to have a big market and a healthy profit margin. So, it’s a complex situation. For example, there are now about four or five companies making powered exoskeletons for people with disabilities related to walking. Do you know what a powered exoskeleton is?


(Referring to the “Amplified Mobility Platform”) Well, funnily enough, my mind goes to a movie example: the Colonel’s suit in the movie Avatar.


Or, the simplest example is the Iron Man suit, which is a powered exoskeleton. But, these companies only make them for the lower extremities - basically, for the legs. The patient wears it, and it picks up their intention to move and helps them stand up and move. It can make people who were never really able to walk, walk again - which is super cool. But, these companies need to go through a lot of regulations to get it approved for use. That’s the trade-off. I think there are already four or five companies in the U.S. that have passed FDA approval, so it’s starting. How long it will take isn’t necessarily just about the technology. It needs to make financial sense, because these suits aren’t really made by universities but by companies, and companies need to do all these calculations to see if they’ll make money on it. If, for example, Google saw it and wanted to push on it, then obviously it would happen a lot faster.


Everything has its own pace. When we talk about telerobotic surgery, people always think about one famous company, Intuitive Surgical, who make the da Vinci surgical robots. And actually, in that field, this company does better than the academics - the university research labs. That really shows. They also hire some very good people to keep doing research on that product. Telerobotics in surgery is a big topic in robotics, and this company holds a really good position in it; they have lots of patents and so on. So, research in universities versus research in companies: they just move at different speeds.


When it comes to humanity coinciding with AI, what are the biggest takeaway points that kids should learn, philosophically?


When we teach kids about AI - especially how AI views a picture, or a painting - we don’t need to teach them about the neural network behind it. What we usually do is show them a picture and ask them to describe it, and they will come up with a lot of different things. For example, if I show a picture of the Golden Gate Bridge and say, “What is this picture?” they will say, “Oh, that’s San Francisco, that’s the Golden Gate Bridge, and the clouds,” and so on, and they’ll tell lots of stories. Humans have this natural ability to think about anything associated with whatever is in a picture. We’ll talk about this for ten or twenty minutes. Then we’ll say, “How would AI see this picture?” and we’ll load the picture into an AI program that does image recognition. Then things jump out: it says, “A city: 99%,” and then, “There’s a bridge: 98%,” and then, “Humans: 98%,” and it outputs a lot of keywords with a confidence of ninety-something percent, or sixty percent, or whatever. Then we’ll ask whether the AI will know that the picture is the Golden Gate - probably not, because we didn’t really use enough labelled “Golden Gate” images to train it. Imagine it’s an alien from another planet, and we showed it this picture - it’s not thinking in the human way. It’s not really thinking at all; it produces these words with a probability associated with each one. That’s the AI way of “thinking,” and only that way. Now, researchers are trying to make AI tell stories out of a picture, but it’s still failing at that. Maybe in the future we can make it a little better, but still, AI thinks in a completely different way than human beings. So, after that kind of class, everybody knows, “Okay, AI thinks differently.” Humans have this natural ability of empathy, even though empathy is very hard to do: people tend to think, “I’m thinking this way, so you also think this way.” Kids will think, “Oh, that’s the Golden Gate; I’ve been there last week…,” and then they will automatically assume AI thinks in a similar way - which it does not. We show them that AI does not think the way they think. We don’t really teach high-level philosophical concepts; we use examples to teach them that AI functions differently.
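[Editor's note] The "label plus confidence" output from the classroom demo can be mimicked in a few lines. The labels and raw scores below are invented for illustration (chosen so the output resembles the 99%/98%/60% figures mentioned); real recognizers produce such scores from a trained network, not a hand-written table:

```python
import math

# Hypothetical raw model scores for the labels the model was trained on.
# Note "Golden Gate Bridge" is absent: a model can only emit labels it
# was trained with - it does not "recognize the place" the way kids do.
raw_scores = {"city": 4.6, "bridge": 3.9, "person": 3.8, "cloud": 0.4}

def confidence(score: float) -> float:
    """Squash a raw score into a 0-1 confidence (logistic sigmoid)."""
    return 1 / (1 + math.exp(-score))

for label, score in raw_scores.items():
    print(f"{label}: {confidence(score):.0%}")
```

Each line is just a label with a probability attached - no story, no associations - which is the contrast with human description that the class is built around.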