[Image: a woman smiling at a cocktail party.]

Last night I attended the OpenAI Forum Welcome Reception at OpenAI’s new offices in San Francisco. The Forum is a recently launched initiative from OpenAI that is meant to be “a community designed to unite thoughtful contributors from a diverse array of backgrounds, skill sets, and domain expertise to enable discourse related to the intersection of AI and an array of academic, professional, and societal domains”, and this was their first in-person gathering.

I was the only attendee from Norway, and I think the only one from Europe. You see, at first I thought this was an online event, and asked whether there would be other welcome events more suited to European timezones. The amazingly friendly coordinator, Natalie Cone, wrote back quickly: “No, it’s an in person event! We’re thinking of doing one later in London – but you could always fly in for this one!” After a few moments feeling grumpy about all the opportunities available to researchers in the US that people like me in small European countries can’t access, I realised that actually, yes, I could fly in. Part of the glory of ERC funding is that I have money available to do things like this, and given that my project aims to “develop knowledge that can contribute to a society where we develop technology that is good for us,” it makes a lot of sense to build networks with the people building some of the most influential technology around right now.

The reception was held at OpenAI’s new offices in the Mission District. The entrance was discreet and unmarked, but friendly people at the address we had been given cheerfully greeted us and ushered us into a pleasantly decorated foyer. Our QR codes were scanned, and we were each given a name tag and a glass of champagne.

The event was a reception, with a jazz band and amazing food from Osito (it was so good!), but the best thing was simply the people. It seemed that every person I spoke to was thinking about AI in intriguing ways. Arka Dahr gave a brief welcome speech, explaining that they wanted to bring people together to generate ideas. He expected we had arrived with excitement about AI, and perhaps some concern as well, and said the goal is to develop AI as constructively as possible.

Ilya Sutskever, the co-founder and chief scientist of OpenAI, also gave a very brief welcome speech. Sutskever famously leads the OpenAI superalignment team. He has said he believes a superintelligence (which OpenAI describes as even more capable than artificial general intelligence, or AGI) could be developed within a decade. I find that hard to believe – but at least OpenAI is focusing on how to align such a superintelligence with human values (hence “superalignment”) rather than just pushing the narrative that it will destroy us all.

Sutskever spoke slowly and deliberately in his short speech. “The creation of AGI is the most profound thing that can be done,” he said. “OpenAI can train models, but AGI is so much more than just models.” What an interesting opening for an event designed to include academics and artists as well as tech developers.

It’s easy to be scared of, or at least skeptical of, OpenAI. They are a huge and powerful player in the rapidly developing field of AI. Although their models are available to everyone, they are not open about what the models are trained on. They have resources to do things we academics can only dream of, and from a European perspective there is an obvious danger of cultural imperialism, where American models become the default and European culture is eroded. As I’ve written about previously, LLMs like ChatGPT can speak many languages, but they reproduce the cultural biases in their datasets – not just the racist and sexist biases that are often discussed and can be reduced through alignment techniques like RLHF (reinforcement learning from human feedback), but deeper structures that will be harder to deal with.

A Google ad just a block from my San Francisco hotel asks “Can your AI speak 135 languages?” It would be easy to read this through the lens of American cultural imperialism. Multilingualism gives power to tech companies – but it also gives power to speakers of small languages, who can now participate more fully. This isn’t a simple good/bad situation.

The best thing about the reception was the people, and having plenty of time to actually talk with them. I discussed whether AI could be said to have freedom of speech with a person writing a law PhD on exactly that topic (she says possibly), and whether AI could ever be said to be conscious with a professor of philosophy and a person from OpenAI (not really, although it depends on what you think consciousness is). I met a couple of people from Berkeley I’d met previously, and was thrilled to discuss how their work connected to AI.

I met an artist from RISD, Daniel Lefcourt. I love these videos he has made of people speaking the prompts that generated their images. He said he used Midjourney to generate the images, then fed them into a commercial system that lets you upload a single image and a text and generates a talking-head video for you.

I also got to ask Ilya Sutskever what he thought I should tell people at the Norwegian Ministry of Education and Research on Monday, when I’m giving a talk on AI (what else) at their annual conference.

“They’ve asked me to talk about whether we can trust ChatGPT,” I explained, “whether it generates factual information or not.” Of course, right now, ChatGPT cannot be trusted: it generates strings of words and sentences that are statistically probable based on its training data, and often makes things up. So the answer, for now, is pretty obvious to me: use ChatGPT for brainstorming, inspiration and summarising, but don’t rely on it for writing truthful, factual texts. That’s not how it works. But I asked Sutskever anyway.
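If you want a concrete (if cartoonishly simplified) picture of what I mean by “statistically probable strings of words”, here is a toy sketch in Python – my own illustration, nothing remotely like OpenAI’s actual models – that learns which word follows which in a tiny corpus and then generates fluent-looking text with no regard for truth:

```python
import random
from collections import defaultdict

# A toy bigram "language model": it only learns which word tends to
# follow which in its (tiny) training data. Real LLMs are vastly more
# sophisticated, but the core principle is the same: predict a
# plausible next token, not a verified fact.

corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count which words follow which in the training data.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate a 'statistically probable' string of words."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        # Sampling from the list weights choices by how often
        # each continuation occurred in the training data.
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Run it a few times and you’ll get sentences like “the cat sat on the rug”, which never appears in the training data: perfectly fluent, statistically plausible, and made up. That, in miniature, is why fluency is not the same as truthfulness.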

Sutskever looked into the distance for a moment, seeming to consider his words carefully. “It may not be completely truthful now. But we are working fast. Within a year a lot will have changed.”

I also asked him about the cultural biases I wrote about in my post on ChatGPT as multilingual but monocultural, and again he said this will improve radically in a short time.

While we visitors all wore name tags, the OpenAI employees just had their keycards hanging from their belts so you couldn’t see their names. Walking up to someone in a black t-shirt and no name tag was thus a good way to start a conversation with someone actually developing the OpenAI models.

I discussed multimodal perception with Rowan Zellers, who wrote his PhD on it and now researches it for OpenAI (Rowan’s keycard was more visible than those of the other interesting OpenAI people I talked with, so I remembered his name). I was curious about how many different modalities they might aim for models to perceive. Text and images, sure, and spoken language – but what about smell? Body language? Maybe reading brain waves, like in yesterday’s NY Times article about the brain-to-text interface being developed for people who can’t speak after a stroke? Actually, I guess I was the one suggesting most of these things – the OpenAI people were very friendly, but of course the conversations never seemed to go anywhere I wouldn’t have been able to read about in their published papers anyway.

As the event came to a close and we walked back out onto the street, I saw a couple of other attendees jumping into a strange-looking car. “It’s driverless!” someone called out excitedly. And sure enough, there was no driver – just a lot of strange cameras and devices on the roof. The people getting in explained that you just download an app and book these cars like any other rideshare. “It’s easy,” they said, clearly enjoying themselves.

I suppose I shouldn’t have been surprised that only two of the participants in the OpenAI Forum Welcome Event left in a driverless car. The rest of us stood around waiting for human-driven rideshares…

I checked the website later, hoping I could book one and get my first experience of a driverless car. But alas, although San Francisco is one of the three cities offering the service, there is a waitlist and they want to know your zipcode, so I’m guessing tourists would not be prioritised. Still, might be worth signing up so I can ride one next time, right? [Update: I later saw this NYTimes video of the other driverless rideshare in SF: Google’s Waymo.]

The Lyft I got back to my hotel had a lovely human driver, a Mexican-Australian who was excited to be going to Sydney the next day for a family wedding. Being full of social energy after the reception, I was thrilled to have a chatty driver. See, that’s the thing: I love technology, but you know what, humans are pretty amazing. I love experiencing multimodal embodied perception with other human beings.
