Thomas Friedman & Eric Schmidt of Google @ Personal Democracy Forum

They’ve set up a nice little interview thing on stage, with Thomas Friedman from the New York Times and Eric Schmidt from Google sitting across from each other with a little IKEA table between them and a soft Persian carpet underneath them. It’s a Keynote Conversation. I don’t think I’ve seen one of them before.

“I don’t think the internet is as important as health care, for instance, but I think it’s almost as important. If you don’t have access to the internet, you don’t have access to the modern world.”

“This looks like a Google meeting, nobody’s looking at the speaker, everyone’s on their computers. As an old person this disturbs me! But this is a battle we’ve lost, it’s cultural change.” Mentions trying to stop people being online during staff meetings at Google – losing battle (he says this with a laugh).

Friedman: George Bush would never have been elected president if all his wild partying at Yale had been on cellphone videos, all google-able. We got to write our own résumés, but today’s employers go by the entire impression they get of a person online.

[My response would be that if you blog you can still “write your own résumé” – if you google me, so much of the information you’ll find is actually written by me that you’ll still be seeing my representation of myself.]

Schmidt: My proposal is that at the age of 21 you should be allowed to change your name! In a sense, everyone is in the media all the time.

Friedman: Mentions a case where Google Earth shows the royal family of Bahrain’s palace in all its expanse and luxury. This is usually not visible to people – and so Bahrain blocked access to Google Maps.

[My response: Well, but Google greys out the Pentagon – why is that OK if Bahrain hiding their palace is “censorship”? Hm, actually, checking Google Maps I can see the Pentagon after all – I’m sure it used to be greyed out. People in the chat here are saying it used to be but no longer is – someone said Area 51 is, but I can see it too on Google Maps.]

Schmidt: The Great Firewall of China – Google actually tells Chinese users that this information is being omitted. The arrival of the internet in China is changing politics in China. 140 million internet users. Google is gaining market share.

Friedman: If I want to get hired at Google, how do I go about it, and how do you go about finding out whether to hire me? How many people are applying for jobs with you every day?

Schmidt: We have hundreds of recruiters working worldwide. Originally they wanted to use a scientific way of finding the very best people. Look for people who are unusual, have an interest or passion. The fact that you have a broader set of interests means you’re more likely to be successful.

Friedman: Innovation has been said to happen when you have two different spheres that can interact.

Questions (they want to make sure questions come from paid attendees, not from the press – heh)

  • I’d like to know Google’s response to the military’s blocking social network sites? Schmidt: We’d prefer they not. [laughter]
  • About personalised search: Will it lock people into only seeing a particular world view? Schmidt: That’s why we need good education. [basically refuses to consider that Google would have any responsibility here]


2 thoughts on “keynote conversation with eric schmidt from google”

  1. chris tuttle

    In case you’d like a photo for your live blogging… Feel free to use.
    http://www.flickr.com/photos/christuttle/503288965/

  2. Jill

    Thanks, Chris, I’ve added it in 🙂

