CLOCKED OUT INTERVIEW
5.13.25
Chungin “Roy” Lee is the CEO of Cluely, an app that serves as a completely undetectable desktop AI assistant: it can see your screen and hear your audio, providing the user with real-time suggested responses to interview questions, sales calls, or virtual meetings, Cyrano-style. Roy is also 21 years old, suspended from Columbia University, the subject of a recent NYMag article, and, to us, a pioneer in subversive thought. This morning, he, from his Silicon Valley mansion, and we, from our Bushwick three-bedroom, managed to exchange ideas through a technologically constructed town square in the ether. (Zoom.)
ROY LEE (17 MINUTES LATE): Sorry I’m late, I was getting coffee with my girlfriend.
CLOCKED OUT: You’re good.
ROY LEE: Okay, let’s chat.
CO: So before we start, are you using Cluely right now?
RL: Yes I am.
CO: What is it suggesting that you say to us right now?
RL: Well, when there aren’t any questions, it doesn’t really think of anything. It says “Cluely suggests context based reply options. Real-time answers, suggestions based on conversation cues, and customized suggestions using context and user supplied info.” But that doesn’t really help me right now.
CO: How does it decide what the “right answer” is?
RL: It has the entire conversation transcript, it has all the user input, eventually the model will be attuned to your custom conversation history, and it sort of just intelligently reasons using all the context that you’ve given it. For example, I’ve given it a ton of context about me: I’m, like, the CEO of this company, we do XYZ, and I’ll give it information about what you do. And based on that context it’ll try and interpret the information I need at the moment. But if there’s no question being asked then it won’t give you very relevant answers.
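[Editors’ note: a minimal sketch, in Python, of the context assembly Lee describes, where the running transcript plus user-supplied background get combined into one prompt. This is our own illustration; the names and the build_prompt helper are hypothetical, not Cluely’s actual code.]

    # Editors' illustration (not Cluely's code): the conversation transcript and
    # whatever background the user has supplied get folded into a single prompt
    # for a language model, which is then asked what to say next.
    user_profile = "I'm the CEO of this company; we do XYZ."  # user-supplied context
    transcript = [
        "CO: How does it decide what the 'right answer' is?",  # live transcript so far
    ]

    def build_prompt(profile, transcript_lines):
        # Everything the model "knows" is whatever context the user has given it.
        return (
            "Background about me: " + profile + "\n"
            "Conversation so far:\n" + "\n".join(transcript_lines) + "\n"
            "Suggest what I should say next."
        )

    # The assembled prompt would then go to a language model (a hypothetical
    # model.complete(prompt) call); with no question in the transcript, the
    # suggestions come back generic, as Lee notes.
    print(build_prompt(user_profile, transcript))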
CO: For the purposes of a purely informational interview like this one, with no concrete objective, what’s the software’s standard of correctness for how you should most appropriately answer?
RL: The only objective function here is whatever objective the most recent model we made was trained on, so I mean it’s not really trying to be geared towards a goal, it’s just using predictive text tokens and the surrounding context to sort of mathematically predict what the next best information would be. But I mean this is a question of traditional ML vs. transformer-based reasoning models.
CO: What’s the difference?
RL: Yeah, with traditional ML you give something an objective function, like collect as many points as you can, and then you train the model to more intelligently collect points. But with transformer-based models you’re not really doing that, you’re just trying to mathematically predict what the next most useful token for this user would be, rather than trying to optimize for some sort of function. The only thing you’re trying to optimize for is accuracy of the prediction.
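[Editors’ note: a toy sketch, in Python, of the distinction Lee is drawing between optimizing an external objective and optimizing prediction accuracy. The “points” environment and the token probabilities below are invented for illustration.]

    import math

    # (a) "Traditional ML" in Lee's sense: a policy is scored by an external
    # objective, here how many points it collects over ten steps, and training
    # would push that score up.
    def points_collected(policy):
        return sum(policy(step) for step in range(10))

    print(points_collected(lambda step: 1))  # this policy is judged purely by its 10 points

    # (b) Transformer-style language modeling: there is no task reward, only the
    # accuracy of the prediction. Training minimizes cross-entropy, the negative
    # log-probability the model assigned to the token that actually came next.
    def next_token_loss(predicted_probs, actual_next_token):
        return -math.log(predicted_probs[actual_next_token])

    probs = {"Hawaii": 0.05, "Cluely": 0.70, "Columbia": 0.25}
    print(next_token_loss(probs, "Cluely"))  # low loss: confident and correct
    print(next_token_loss(probs, "Hawaii"))  # high loss: the prediction missed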
CO: Sure, when there is a clear binary right/wrong answer I can see how either of those could be helpful. But how does something like Cluely attempt to procure a “correct answer” in the context of a social interaction?
RL: I guess, to be honest, the technology for the launch video isn’t super there.
[Cluely’s launch video shows Lee on a date with an older woman, wearing Cluely-implanted glasses that help him win her over…until he fumbles the second date because he has to go to an anime convention with the boys.]
I guess you could prompt it very specifically, like “I’m on a date with this girl, and I need you to give me the best lines to make her fall in love with me.” But a.) this is not something AI is good at, and b.) the actual context would need to be provided by the user. We don’t independently go and look through Instagram and say “Oh, six months ago she was in Hawaii, so bring up Hawaii.” We don’t actually do that yet. Right now this is mainly based on: you ask me a question I don’t know about and I can give you an intelligent answer based on whatever ChatGPT thinks is correct.
CO: So you didn’t use Cluely on your coffee date with your girlfriend?
RL: Unfortunately not, no. I didn’t bring my computer.
CO: That’s good. So the New York Magazine article that featured you has been a pretty big hit, and a lot of people are talking about it. Do you feel like it accurately represents you and your position?
RL: Uhh, I didn’t read the entire article, but essentially it said I cheat on a bunch of things, right?
CO: That was part of it.
RL: Yeah, like if there’s a shorter way, I’ll usually take the shortcut. If AI lets me cheat on an assignment I’m not gonna write the essay to be honest.
CO: So you don’t feel like the things you had to learn in university were really worth your time?
RL: I mean, I thought it was cool when we read the Odyssey together. I read the book because I wanted to read the book, not because I wanted to write about, I don’t know, like, fucking Plato’s use of imagery in section three or whatever, like I don’t care about that.
CO: Okay. So your manifesto says “We want to cheat on everything. Yep, you heard that right. Sales calls. Meetings. Negotiations. If there’s a faster way to win – we’ll take it. We built Cluely so you never have to think alone again. It sees your screen. Hears your audio. Feeds you answers in real time. While others guess — you're already right.”
It then goes on to draw a parallel between Cluely and AI in general and other modes of technological innovation such as the calculator, Google, spellcheck, innovations which took “shortcuts” and were branded as “cheating.” Do you really think that something like Cluely is going to be seen in history’s eye as akin to these inventions?
RL: It’s not just Cluely—
CO: And AI.
RL: I mean with AI in general we see this today: teachers are trying to ban the use of AI, but pretty much the global consensus among the youth is that we should just be more accepting of it. This is not technology that’s going away, so it’s better to prepare for a world where it’s universal. Yeah, I think it’s not just Cluely that’s gonna be seen this way, it’s literally every use of AI ever.
CO: So what’s the end goal do you think?
RL: The end goal? I mean, the end goal is to make humans much more capable. If everyone immediately—think. If everyone woke up today and it was just a standard, and everyone thought using AI in every single instance where it can be helpful is good, then all of the sudden we don’t have any more data analyst jobs, we don’t have any more meaningless fill-in-the-blank bullshit jobs, humans are a hundred times more productive, scientists can get a hundred times more work done, and within ten years we’ll have cured cancer, cured ageing, and be on the next flight to Mars. AI unlocks—it’s such a gigantic lever of output for humans and the only reason that it’s not the lever that it can be right now is because more people are not willing to use it in every instance possible.
CO: What happens if AI automates the entire workforce, what do people do?
RL: I mean, I bet blacksmiths two hundred years ago were scared for their jobs too, with the Industrial Revolution, but we live in a world without blacksmiths and it didn't kill anybody. There’s just new jobs that are needed in the future when technology enables more things.
CO: Like what?
RL: Like what? Like data analysts. You think a blacksmith two hundred years ago would understand someone’s job being punching numbers into an Excel sheet? They can’t predict the future, nobody predicts the future, but labor’s always gonna be required and human thought is always gonna be required, and maybe we end up in a world five hundred years from now where nobody does any engineering science, and all we do is sit there and paint. And we just paint and read and write books and whatever the fuck. Like, who knows?
CO: But you had the opportunity to paint and read books all the time in college!
RL: Yeah, but that’s not what I wanted to do.
CO: You were more concerned with achieving success.
RL: I was more concerned with the fact that AI is here, I’m trying to do some cool shit, the entire world as we know it is changing, and if anyone’s gonna be involved it’s gonna be me.
CO: I’m interested in the point you were making with the blacksmiths about jobs becoming obsolete because—I mean, you live in Silicon Valley, right?
RL: Yeah, we’re in San Francisco.
CO: So there are, like, massive class divides between the people who are building the AI software and everyone else, the people whose jobs will eventually be usurped by that software. I’m interested in what the steps are in between the total obsolescence of these jobs and the eventual utopia of everyone reading and painting, like how that’ll look—
RL: Yeah, I actually don’t think the class divide and inequality will get worse at all. I think right now it’s about as bad as it possibly could be. There’s homeless people on the street, and AI hasn’t really taken anybody’s jobs. I don’t even think that AI will take that many jobs in the short to near term, like there’s not one industry that’s been massively disrupted by AI. We’re a far ways away—many, many years away from AI actually even being able to take over any fast food stores. I think if you’re pointing to the inequality today and saying that it’s going to get worse if we keep on growing with technology, I mean, what is the alternative? The alternative world is that things stay the same.
If there is no technology pushing society to change and adapt and evolve then we’re stuck in the same systems we have today, and these systems result in the Tenderloin being a piece of shit, and, like, homeless people and fent addicts everywhere—obviously it’s not working. And I think we should push for change quickly. And I think it’s a very negative viewpoint to think that because things will change, or because of some arbitrary notion that maybe some jobs will be replaced, that this is going to result in more homeless people and more crack addicts. I don’t know, maybe AI next year develops some technology that makes addiction go away tomorrow. And there’s no more crack addicts.
CO: The future with AI is very open ended—the same way you said blacksmiths couldn’t possibly imagine what the job of a data analyst would be like, we have no comprehension of what a superhuman intelligence could be like. What I’m talking about right now is Artificial General Intelligence, which concerns me. Are you concerned at all about an alignment problem?
RL: I generally am very optimistic about technology. If you’re proposing that we could put superhuman intelligence in a box, we can just ask it, “Hey, how can I cure cancer? How can I cure Alzheimer's? How do I send people to a different universe on a different inhabitable planet thousands of lightyears away?” and it could tell me how to do that, I think that’d be really fucking cool.
CO: But what if it decides that it doesn’t want to help you? What if it decides that it doesn’t want you to be its master anymore? It’s smarter than you.
RL: Then I’ll be the servant of the AI.
CO: You want that?
RL: This is such a different world that you’re proposing. You guys cannot possibly imagine what it looks like when there is literally God inside a box.
Maybe you would want to be its servant.
CO: That’s an interesting point.
RL: The only thing I’m proposing right now is that the world that we live in is really bad for a lot of people. There’s starvation, mass homelessness, and America, despite being a first world country, has the highest inequality rates ever. This is not a happy society. This is not a utopia. And the way to get to utopia is not stasis and resistance against the things that are disrupting the status quo. The way to utopia is embracing it fully and having everyone have full understanding about it, so maybe the marketplace of opinions will decide that “hey, this is the safest route,” but if no one’s contributing and everyone is trying to stop change then we’ll just live in the same world full of crackheads for the next twenty years, and I don’t think anyone wants that.
CO: Is there a reason to rush into it, though? Wouldn’t it make sense to take time to make sure we’re not creating something that will existentially threaten human existence?
RL: Yeah, the reason to rush into it is that, as you guys very eloquently put it, there will be a growing gap between people who know and leverage AI and people who don’t. This world of “Oh, let’s just take it slow,” will have twenty percent of early adopters feel like they’re literally cheating in every aspect of life. They’re using it on all their homework, they’re passing all their assignments, they’re doing all this extra shit that AI allows them to do, while eighty percent of society lags behind because there are voices saying “Hey, don’t use this, this is unfair, this is cheating.” What if we lived in a world where everyone used AI in every instance possible? Like, use AI to apply for jobs, use AI to find a girlfriend, use AI to find your friends, find events near you—if everybody used AI everywhere, there are no cheaters, and there’s less of this inequality that you guys are worried about.
CO: So it’s more of a great homogenization—
RL: Exactly. And that’s the point of the manifesto. Everyone is just accelerated.
CO: Do you feel like you’re missing out on some parts of life if you’ve outsourced all of your decision making?
RL: The technology is not there yet. I can’t outsource all of my decision making yet.
CO: Do you feel like technology is making us smarter? If you cheat on all of your assignments, use ChatGPT to write your essay—I don’t think that ChatGPT is actually better at writing essays than most people. I think it’s bad.
RL: Great, then use ChatGPT to help you ideate and write better with the voice that you know writes better.
CO: But isn’t it better to come up with ideas yourself? Aren’t you cheating yourself?
RL: I mean, when the calculator came out people were saying “Yo, isn’t it better to do all the derivatives by hand and do all the manual calculations? Your brain’s gonna get rusted from offshoring all these calculations.” But is that what calculators did? Or did calculators enable us to get rid of the manual work of doing calculations and allow for new theories and methods and modalities of mathematics that required millions of calculations that could never have been done by hand? I mean, the whole idea of backpropagation in machine learning, which is the core thing behind all of this, would not have been possible in a world where people rejected calculators and thought that you need to do calculations by hand because it’s better for your brain.
I mean, maybe writing looks entirely different in the future, maybe once we get past this idea of “Let’s try and come up with our own ideas and write every word and tactical grammar bullshit by hand,” maybe once we get past that, maybe, then writing as we know it changes.
CO: But what you’re suggesting we ask ChatGPT for isn’t calculations, it’s ideas, and isn’t that about the indomitable human spirit? When you’re asking for calculations, there are correct answers, but there may not be correct answers in the pluralistic world of ideas.
RL: The final frontier of humanity, and the final distinguishing factor between our humanity and AI, is our ability to come up with new, novel, interesting, funny, romantic things. I mean, The Lord of the Rings probably couldn’t have been generated by AI. But suppose I’m a writer in the future and I have my own preferences and my own tastes, and I vaguely have this understanding: I want to develop this world with elves and hobbits and all this bullshit. I use AI to generate maybe twenty fucking different copies of this story I’m trying to work towards. I pick the one that I like. I’m the one that decides I like it because it’s my tastes and it’s my preferences that told me it’s the best one, and I like this one so I’ll generate twenty more versions of chapter two, twenty more variations of chapter three, and so forth.
CO: But how soon until you outsource your tastes to AI? And it decides which of the twenty versions are the best?
RL: You guys are asking me “When does AI overtake humanity in the thing that makes us human?”
CO: Yes.
RL: Which is our tastes and preferences.
CO: Ehh…
RL: And I’m saying, imagine the technology is here where GPT and AI can ideate and be more creative than humans. Like, there’s no human in the world that can be more creative than AI. Imagine this world exists. Because this technology isn’t there, and we are far away from this technology being there. But imagine this world exists.
CO: Do you believe in the soul?
RL: Yeah. This is exactly what I’m talking about. Like, the soul—there’s no matrix multiplication that can get an AI to have the soul that a human has.
CO: Oh, so what you’re saying is in a world where we are all outsourcing everything to AI and all the intermediary steps are completely eliminated, and we have, as you said, this goal of homogenization, when you go to decide who you’re in love with and who your partner is, all the useless fat of the interaction is cut down and all that is left is the very crux of the person?
RL: Yes. Exactly, exactly. That is what this world looks like. This is what happens right now already, bro. We have Tinder algorithmically matching us with our hundred best preferences and we already do this. It’s just not done at the scale that it will be in the future. And I think everyone agrees, or a lot of people will agree, that right now, in the social anxiety riddled world we have, Tinder and Hinge are a net positive for society.
CO: It’s interesting to hear you say that most people agree that Hinge is a net positive, because I feel like most people who we interact with would disagree.
RL: In this social anxiety riddled world that we live in today, it’s not possible to say “Hey, just go out and meet people organically,” because people don’t go out and meet people organically. The rates of social anxiety are much higher than normal. I think the ability to algorithmically match yourself with one hundred of your closest possible—like your looks-match, and your personality-match—this is positive, and without it nobody would go out and nobody would have sex.
CO: It depends on the world that you’re living in, I guess. Maybe in Silicon Valley that’s true but in Brooklyn, New York things are kind of different.
RL: You guys can romanticize the beauty of romantic connection and bullshit but Hinge got popular for a reason and it was because people used it.
CO: It’s kind of a disheartening thing that we’ve mechanized these organic processes.
RL: Just imagine that there’s an AI that knows everything about you, it knows the last twenty years of everything you’ve experienced in your life, all your taste preferences, exactly what kinds of guys or girls you like, and it matches you with the single most person in the world that—
CO: Your soulmate.
RL: Your soulmate. And AI can do this for you. Are you not curious to see who your soulmate is?
CO: I’m curious. I’d definitely meet them.
RL: This is positive for society. This tells you that you will live the happiest life with this person, and this is good.
FIONA: But I don’t know that I believe that it could do that. I guess if I put my skepticism aside, maybe.
ZOE: Is there nothing immeasurable about me in the way I present IRL that my soulmate might find charming? It’s all about the taste preferences of the inner soul?
FIONA: And it feels like it takes something away from you—that you don’t go on this journey yourself, you’re put on this conveyor belt towards your life and you’re just shuttled around to your perfect job and your soulmate based on your IQ-level and racial preferences and whatever.
ZOE: Wasn’t life about the friends we made along the way?
RL: Was there a question?
CO: Are you involved at all in politics?
RL: HELL no.
CO: You don’t have any political—you don’t follow politics?
RL: I believe in pro-choice and that’s about it.
CO: Wow, okay. Interesting. That’s your one stance?
RL: I do not care about any other political issue, but I don’t want to be a father yet.
CO: Oh okay, so you care about pro-choice because—for you.
RL: Yeah.
CO: So how does Cluely fit into the intermediary steps in between achieving an AI utopia and now?
RL: So I think the worst-case scenario is we have an elongated twenty-year period where some people are using AI maximally and some people are not. This results in the gigantic inequality gap that you guys are describing. I think the goal for Cluely is to be a very useful assistant that is distributed en masse because of my ability to go viral. Everybody tries Cluely and realizes that, “Hey, even though there’s this Trojan Horse idea of using it on my homework, turns out it is actually useful for everything,” and more and more people adopt this mentality of “I’m gonna use AI everywhere, I’m gonna use Cluely everywhere because Cluely is applicable for every single technology on my computer.” And the more people get used to this idea, at some point we hit a consumer hive mind and it’ll be as widely propagated as Snapchat and Instagram.
CO: But for right now, what about all the dumb people who are gonna get jobs with Cluely who are unqualified?
RL: If AI can do it then—muhfucka, like, you either got asked the wrong interview questions or your job is not that hard, in which case you’re good. Keep using AI, and you won’t get fired. Genuinely, name one job—doctors aren’t getting asked job interview questions that are not passable by AI. You can’t fuckin perform open heart surgery during your trial or whatever the fuck with AI. You can’t do that yet. Name one job whose interview you could pass with AI whose actual work you couldn’t also just breeze through with AI.
CO: Teacher.
RL: Teacher? How do they interview teachers? Isn’t it just like a fuckin personality assessment to realize you’re not gonna fuckin diddle the kids or whatever? Like, you don’t have to be that smart to be a teacher. You can grade all your assignments as a teacher using AI, you can generate assignments using AI, all you gotta do is fuckin teach kids good values and moral practices. I don’t remember shit from my US History class in 8th grade. I remember my teacher, though. He was a G.
CO: What are your thoughts on Peter Thiel and Elon?
RL: I love Peter Thiel! I love Peter Thiel. He’s the fucking GOAT investor bro. I think he’s got so many good ideas. He’s an inspirational figure in the Valley.
CO: What’s his best idea, or your favorite of his ideas?
RL: I think he came up with the notion that the biggest companies are built by people who believe something different that nobody else agrees with them on. And this is the whole pinnacle of being a contrarian. And I agree with this completely and I think this is something I try and embody. The idea that cheating, all AI cheating today is a good thing, this is something I think, and seemingly nobody else agrees with me. But I think in a few very short years I will be proven right, and this will be the new status quo, and this will be normal, and the person who will have embraced it most, earliest on, will have built the biggest company, which will have been Cluely.
CO: So you hold your personal success to a really high level of importance.
RL: Personal success?
CO: Like financial, the glory—
RL: I don’t really care about money, I care about doing cool shit.
CO: Do you feel like you’re changing the world?
RL: Yes, in a good way.
CO: Well, it’s been a pleasure.
CLOCKED OUT is a print magazine created and edited by Fiona Pearl Miller and Zoe Laris-Djokovic, and designed by Anya Osipov.
CLOCKED OUT is meant to be an examination of how we spend our time. What are you thinking about? What are you supposed to be thinking about? What are you supposed to be doing? We don't know, but it's definitely worth talking about. So here we've sought to carve out a space where we can do exactly that—interact with a range of nuanced convictions of others living under the same reign of modernity, forcing ourselves into a position of, if not the expansion of our own convictions, at least a consideration.
CLOCKED OUT does not intend to tell you what to believe. This magazine knows nothing, has no party and nothing to gain. What we want to do here is provide a physical space for the interrogation of ideas and get at the essence of them—to read, think, and live deeply—and clocked out.
McNally Jackson, NYC
Mast Books, NYC
Spoonbill & Sugartown, Brooklyn
Secret Riso Club, Brooklyn
Quimby’s Bookstore, Chicago