What do you think about this debate?

Let me give you some context. Two important figures in the field of artificial intelligence are taking part in this debate. On the one hand, there is George Hotz, known as "GeoHot" on the internet, who became famous for reverse-engineering the PS3 and breaking the security of the iPhone. Fun fact: he studied at the Johns Hopkins Center for Talented Youth.

On the other hand, there's Connor Leahy, an entrepreneur and artificial intelligence researcher. He is best known as a co-founder and co-lead of EleutherAI, a grassroots non-profit organization focused on advancing open-source artificial intelligence research.

Here is a detailed summary of the transcript:

14 comments
  • Both of them forget that there’s already a mal-aligned all powerful entity that’s manipulating all of us to act in the interests of a select few rather than the many. And it doesn’t need AI to do it.

    • I fully agree. And not only that, I'm also intrigued to know what licence GeoHot would choose to launch such an open source AI. If he chose the more libertarian option, he would probably use the MIT license. If so, any powerful entity could take that AI as a base, lock down the code and build a malicious AI based on the open source AI. In the end, all efforts to "democratise" open source AI would be in vain.

  • "technical finesse of elon musk with the wits and charm of tony stark" this is as far as i got

    • Both of those sound like backhanded insults lol.

    • Fair enough 😅 I know the participants are cringe, but I have shared it because I would like to hear your opinion from a Marxist perspective. GeoHot is an accelerationist and Connor I think tries to be "apolitical", you know... lol

      Anyway, I've put in the description of the post a summary of the transcript in case someone wants to know what they say without having to watch the video.

      • I feel like tech people worry too much about AGI, which is a bit baffling to me. I don't think AGI is even conceivable at this point, which is why a lot of what they talk about sounds like sci-fi world-building.

        Like when Hotz says that AI technologies will improve exponentially, I don't know how he can just accept that as a fact. Sounds a bit tech utopian.

        What I worry about more is that the internet is gonna be flooded with AI-generated garbage to exploit SEO for clicks. I don't want AI to replace artists, voice actors, programmers, etc., because the current trajectory seems to be heading towards removing human labour so that the capitalist class can keep a bigger chunk of the profit, rather than towards AI being used as a tool to enhance productivity.

        AI being a monopoly of big corporations is also an issue. I don't know what kind of resources it takes to train and run an LLM. But a corporation like OpenAI, flush with vulture capital money, will be much better placed to run the whole training pipeline. It must be a very labour- and compute-intensive process that a non-profit will not be able to match even if the underlying algorithms are open source. I doubt software running on a consumer's machine like LLaMA will be able to compete with something like GPT.

        I am not being coherent because I try to keep myself out of the loop when it comes to AI because I have a knee jerk aversion towards the technology. But I hope I made a semblance of a point. AGI scare is a red herring. Execute Bill Gates.

  • highly recommend these two essays from Ted Chiang on the subject

    Personally, I think that you can't really put the toothpaste back in the tube at this point. Now that we've had a glimpse of the possibilities that AI offers, it will continue being developed rapidly across the globe. What's more, any countries that try to put brakes on AI development will quickly find themselves at a disadvantage relative to countries that don't. For this reason alone, AI will be seen as a national security concern by all major nations.

    There are obviously lots of applications for AI in the realm of automation, but I think where it could become game-changing is large-scale planning. For example, an AI could monitor resource usage and direct the production and allocation of those resources in real time. This would allow for an unprecedented level of economic planning efficiency. China already has a huge amount of automation and robotics in industry; imagine that being coupled with automated planning. Another important use could be watching global trends. An AI could potentially predict economic downturns, wars, pandemics, you name it. A country with such a predictive engine would be able to mitigate the impact of such events far better than others.
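The allocation idea above can be made concrete with a toy sketch: a greedy allocator that serves the most productive production lines first. Real planning systems would use large-scale optimisation (linear programming and friends); all names and numbers here are made up purely for illustration.

```python
# Toy real-time allocator: give a scarce resource to the production lines
# that yield the most output per unit of input. Hypothetical data, not a
# real planning system.

def allocate(supply, demands):
    """demands: {line: (units_wanted, output_per_unit)} -> {line: units_given}"""
    plan = {}
    # Serve the most productive lines first.
    for line, (wanted, per_unit) in sorted(
            demands.items(), key=lambda kv: kv[1][1], reverse=True):
        given = min(wanted, supply)
        plan[line] = given
        supply -= given
    return plan

demands = {"steel": (60, 3.0), "cement": (50, 2.0), "glass": (40, 2.5)}
print(allocate(100, demands))  # {'steel': 60, 'glass': 40, 'cement': 0}
```

A real system would re-run something like this continuously as consumption data streams in, and would use proper optimisation rather than a greedy pass.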

    All that said, we are nowhere close to having any sort of AGI at the moment. What we have currently are glorified Markov chains that are trained on stupendous amounts of data, but have no meaningful understanding of that data in a human sense. All these models know is that a particular set of symbols tends to follow a particular different set of symbols. They simply encode statistical relationships without any real context around them.
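The "statistical next-symbol" point above can be shown with a toy bigram model: it records which word tends to follow which, and nothing else. This is an illustrative sketch, not how an actual LLM is built (those use neural networks), but the training signal is the same next-token statistic.

```python
# Bigram "language model": pure symbol statistics, no understanding.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Predict the statistically most common successor of `word`."""
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # 'cat' -- it follows 'the' twice, 'mat' once
```

The model "knows" that "cat" tends to follow "the" without having any notion of what a cat is, which is the point being made about LLMs, just at a vastly smaller scale.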

    One promising path forward is using embodiment, where the model is coupled with either a virtual avatar or a physical robot. Then the model is trained to interact with the physical world through reinforcement, and this leads it to create an internal representation of the world that's similar to our own. This gives us a shared context that we can use to communicate with the model trained in this fashion. Such a model would have actual understanding of the physical world that's similar to our own, and then we could teach it language based on this shared understanding. At that point, you could tell the robot to get a cup from a table, and it would have an idea of what a table and a cup map to in its environment.
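The reinforcement loop described above can be sketched with the simplest possible setup: tabular Q-learning in a hypothetical one-dimensional world, where the learned Q-table plays the role of a (very crude) internal representation of the environment. Everything here is an invented toy, not anything from the debate.

```python
# Minimal embodied-learning sketch: an agent in a 1-D world (cells 0..4)
# learns by trial and error that moving right reaches the goal.
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)          # step left / step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2

random.seed(0)
for _ in range(500):                       # episodes of interaction
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current Q-table, sometimes explore.
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0     # reward only at the goal
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy from the start heads toward the goal.
print(Q[0].index(max(Q[0])))  # 1 -> "move right"
```

The Q-table is the agent's entire "world model": a map from situations to expected outcomes, built purely from interaction, which is the shared-context idea in miniature.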

    It's hard to say whether current LLM approaches are flexible enough to support this sort of an AI, so we'll have to wait and see what the ceiling for this stuff is. I do think we will figure this out eventually, but we may need more insights into how the brain works before that happens.

    • What’s more, any countries that try to put brakes on AI development will quickly find themselves at a disadvantage relative to countries that don’t. For this reason alone, AI will be seen as a national security concern by all major nations.

      In fact, we have seen that Americans are becoming increasingly fearful of AI, in contrast to the Chinese, who generally trust it. This could come down to who has control over the AI. In the US, citizens imagine the most dystopian version of a large-scale rollout of these models because they know the government will use it to further repress the working class. In China, government regulation of AI generates trust because people trust the government. But as I mentioned in another comment, an open-source AI for the whole population would be useless if the code were governed by a libertarian licence like MIT/Apache 2.0, because of how easy it would be for the ruling class to appropriate this work, privatise it, and improve it to the point where the original code could no longer compete.

      This would allow for an unprecedented level of economic planning efficiency.

      Yes, in fact, isn't that what the Chileans had in mind when they came up with Cybersyn? With the technological advances of our era, especially in the field of AI and so on, it would make sense to go back to this idea. China has the potential to implement it on a large scale in my opinion.

      Then the model is trained to interact with the physical world through reinforcement, and this leads it to create an internal representation of the world that’s similar to our own. This gives us a shared context that we can use to communicate with the model trained in this fashion. Such a model would have actual understanding of the physical world that’s similar to our own, and then we could teach it language based on this shared understanding.

      Regarding what you mention, I have a question (maybe it sounds stupid): assuming these AIs learn and develop in a particular environment and become familiar with it in a way similar to humans, what would happen if they interact with something or someone outside that environment? For example, if an AI develops in an English-speaking country (its environment) and for some reason interacts with a Spanish-speaking person, the cultural peculiarities the AI has learned in that environment would not apply to that person. Do you think this could create a false sense of closeness, or a technical limitation? idk if I'm making myself clear or if this is an absurd question 😅

      • Very much agree that ultimately the question is about ensuring that the AI is in the hands of the working class and not the oligarchs. And I think you've nailed it regarding attitudes towards AI in US and China respectively. People in China know that the government represents them and they trust the government to use this technology in their best interest. Meanwhile, in US, everyone knows the government represents the rich and AI will be used to squeeze the working class even harder.

        I'd forgotten all about Cybersyn; the Soviets had similar ideas as well. I definitely think this sort of thing could work, and I completely agree that China is in the best position to make it happen today.

        Regarding the last question, I expect we'd see the same types of problems we see with humans, where people often have a hard time adjusting to different cultures, learning new languages, and so on. And that's the optimistic scenario, because the human mind is far more flexible than any AI we've managed to create so far. It's really important to keep in mind that this tech is still very limited in practice, and a lot of the claims made around it are just hype.

        I think the kind of contextual learning we could expect would be something like Boston Dynamics style robots that can navigate the environment, and do some basic communication with humans in a restricted context. This can still be extremely useful as you could use such robots in places like factories.

    • There are obviously lots of applications for AI in the realm of automation, but I think where it could become game-changing is large-scale planning. For example, an AI could monitor resource usage and direct the production and allocation of those resources in real time. This would allow for an unprecedented level of economic planning efficiency. China already has a huge amount of automation and robotics in industry; imagine that being coupled with automated planning. Another important use could be watching global trends. An AI could potentially predict economic downturns, wars, pandemics, you name it. A country with such a predictive engine would be able to mitigate the impact of such events far better than others.

      This is possibly the best summary on what direction I think AI should focus on. Right now we have way too many AI research orgs focusing on human-facing systems (chatbots, robots, AI art) that are neat, rather than optimisation engines that can revolutionise an industry.

      I don't know much about the history of it, but during the Cold War there was a bit of a "silent revolution" in the area of Operations Research, led simultaneously by Soviet mathematicians trying to model a planned economy and the Statesian military modelling its gigantic supply lines. The optimisation algorithms behind neural networks (which are what people usually mean by AI) were an offshoot of that area, but sadly advanced material on topics like "constrained non-linear optimisation" appears on very few university curricula, so few students realise the connections and apply the new methods to the age-old problems.
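A minimal worked example of the constrained optimisation mentioned above: projected gradient descent, which takes ordinary gradient steps (the same machinery used to train neural networks) and then projects back into the feasible region. The problem and all numbers are invented for illustration.

```python
# Minimise f(x) = (x - 3)^2 subject to the constraint x <= 2.
# The unconstrained minimum is x = 3; the constraint pins the answer at x = 2.

def project(x, upper=2.0):
    """Clip back into the feasible set {x <= upper}."""
    return min(x, upper)

def solve(x=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        grad = 2 * (x - 3)          # derivative of (x - 3)^2
        x = project(x - lr * grad)  # gradient step, then projection
    return x

print(round(solve(), 4))  # 2.0 -- the constrained optimum
```

Economic planning problems are this same shape scaled up enormously: an objective (output, welfare) optimised under constraints (resources, labour, logistics), which is why the OR lineage matters.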

      Stafford Beer (the Cybersyn guy) was one leading expert in the area.

      "Towards a New Socialism" by Cockshott and "The People's Republic of Walmart" by Phillips are next on my reading list. I haven't read much yet, but they seem like good books for understanding how the massive improvements in mathematical optimisation (of which neural networks are a subset) could allow for an even better planned economy.

      • I suspect that the human-facing focus is an artifact of how western economies are organized. Since there is very little industry, a lot of business activity focuses on the service industry and hence that's where the focus for automation is. On the other hand, China is a huge industrial power, and naturally they're looking at ways to use AI for industrial automation and logistics.

        And yeah, the USSR was always big on this idea of figuring out central planning; had that project taken off, it might've ended up leading in IT instead of the US. I'd say this was one of the more unfortunate mistakes made by the Soviet leadership.
