I just signed a petition calling for Norwegian universities to use research expertise on AI when deciding how to implement it, rather than having decisions made mostly administratively. If you are a researcher in Norway, please read it and sign it if you agree – and share it with anyone else who might be interested.

The petition was written by three researchers at UiT: Maria Danielsen (a philosopher who completed her PhD in 2025 on AI and ethics, including discussions of art and working life), Knut Ørke (Norwegian as a second language), and Holger Pötzsch (a professor of media studies with many years of research on digital media, video games, disruption, and working life, among other topics).

This is not about preventing researchers from exploring AI methods in their research. It is about not uncritically accepting the hype that everyone must use AI everywhere without critical reflection. It is about not introducing Copilot as the default option in word processors, or training PhD candidates to believe they will fall behind if they do not use AI when writing articles, without proper academic discussion. Changes like these should be knowledge-based and discussed academically, not merely decided administratively, because they alter the epistemological foundations of research.

Maria wrote to me a couple of months ago because she had read my opinion piece in Aftenposten in which I called for a strong brake on the use of language models in knowledge work. She was part of a committee tasked with developing UiT’s AI strategy and was concerned because there was so much hype and so few members of the committee with actual expertise in AI.

I fully support the petition. There are probably some good uses for AI in research, but the uncritical, hype-driven insistence that we must simply adopt it everywhere is highly risky. There are many researchers in Norway with strong expertise in AI, language, ethics, working life, and culture. We must make use of this expertise.
This is also partly about respect for research in the humanities, social sciences, psychology, and law. Introducing AI at universities and university colleges is not merely a technical issue, and perhaps not even primarily a technical one. It concerns much more: philosophy of science, methodological reflection, epistemology, writing, publishing, the working environment, and more. […]
David
Bebo and Myspace are worse in terms of privacy than Facebook, in my opinion. Bebo and Myspace profiles regularly appear in Google searches. I have a friend who works for an insurance company that makes its employees google new applicants, searching for such profiles in the hope of weeding out drug users and binge drinkers.
Tama Leaver dot Net » Blog Archive » Annotated Links of Interest: October 28th 2008
[…] Putting Privacy Settings in the Context of Use (in Facebook and elsewhere) [apophenia] – danah boyd’s sensible and timely reminder about Facebook’s ridiculously complicated and confusing privacy settings: “Facebook’s privacy settings are the most flexible and the most confusing privacy settings in the industry. Over and over again, I interview teens (and adults) who think that they’ve set their privacy settings to do one thing and are shocked (and sometimes horrified) to learn that their privacy settings do something else. Furthermore, because of things like tagged photos, people are often unaware of the visibility of content that they did not directly contribute. People continue to get themselves into trouble because they lack the control that they think they have.” [Via Jill] (These, incidentally, are among the reasons why you won’t see any pictures of my son on Facebook! Flickr, where I retain copyright and can actually use meaningful privacy settings, is far preferable!) […]