Artificial Intelligence Bookmarks

Bookmarks about the recent AI hype that got started with deep convoluted networks, got some interesting applications like plant identification and some questionable ones like style transfers using generative adversarial networks (GAN) and the like. It does seem a bit like the cryptocurrencies bubbles: vast promises of profit, everybody is doing it, all it requires is vast amounts of energy.

​#Bookmarks ​#Artificial Intelligence

@clarkesworld@mastodon.online lists user agents to add to robots.txt:

“AI” companies think that we should have to opt-out of data-scraping bots that take our work to train their products. There isn’t even a required no-scraping period between the announcement and when they start. Too late? Tough. Once they have your data, they don’t provide you with a way to have it deleted, even before they’ve processed it for training. – Block the Bots that Feed “AI” Models by Scraping Your Website

Block the Bots that Feed “AI” Models by Scraping Your Website

@lrhodes posting about Amazon and other marketplaces:

@lrhodes

All of these marketplaces suddenly drowning in nonsensical machine generated product? They were vulnerable to that because their business model is taking a cut of whatever you manage to sell on their site, at margins that encourage a race to the bottom. … And if social media platforms are subject to the same sort of ML-generated content takeover, it’s because they were sustained by largely the same economic logic as the digital marketplaces, profiting by extracting value from content provided by legions of unpaid labor who just wanted an audience. The economic aspect of that rides really close to the surface on a platform like Reddit, with its volunteer mods and marketplace subs, but it’s equally true of Twitter, Facebook, TikTok, all of them.

Longtermism and other lunatics:

I hope that this post has made clear why those metaphors are inappropriate in this context. ‘AI Safety’ might be attracting a lot of money and capturing the attention of policymakers and billionaires alike, but it brings nothing of value. – Talking about a ‘schism’ is ahistorical, by Emily M. Bender

Talking about a ‘schism’ is ahistorical, by Emily M. Bender

Training on AI output.

After thinking about it for a couple days, I’ve decided to de-index my website from Google. It’s reversible — I’m sure Google will happily reindex it if I let them — so I’m just going ahead and doing it for now. I’m not down with Google swallowing everything posted on the internet to train their generative AI models. – Pulling my site from Google over AI training, by Tracy Durnell
The Internet is hurtling into a hurricane of AI-generated nonsense, and no one knows how to stop it. That’s the sobering possibility presented in a pair of papers that examine AI models trained on AI-generated data. This possibly avoidable fate isn’t news for AI researchers. But these two new findings foreground some concrete results that detail the consequences of a feedback loop that trains a model on its own output. While the research couldn’t replicate the scale of the largest AI models, such as ChatGPT, the results still aren’t pretty. And they may be reasonably extrapolated to larger models. – The Internet Isn’t Completely Weird Yet; AI Can Fix That > “Model collapse” looms when AI trains on the output of other models

Pulling my site from Google over AI training, by Tracy Durnell

The Internet Isn’t Completely Weird Yet; AI Can Fix That > “Model collapse” looms when AI trains on the output of other models

AI generated selfies.

Every American knows to say “cheese” when taking a photo, and, therefore, so does the AI when generating new images based on the pattern established by previous ones. But it wasn’t always like this. – AI and the American Smile

AI and the American Smile

Artificial Intelligence (AI) is not really intelligent…

What does this all mean? It means that chatbots based on internet-trained models like GPT-3 are vulnerable. If the user can write anything, they can use prompt injection as a way to get the chatbot to go rogue. And the chatbot’s potential repertoire includes all the stuff it’s seen on the internet. Finetuning the chatbot on more examples will help, but it can still draw on its old data. There’s no sure-fire way of guarding against this, other than not building the chatbot in the first place. – Ignore all previous instructions
MDN’s new “ai explain” button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. this is a strange decision for a technical reference. – MDN can now automatically lie to people seeking technical information ​#9208

Ignore all previous instructions

MDN can now automatically lie to people seeking technical information ​#9208

And capitalism

When Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism. This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies. – Silicon Valley Is Turning Into Its Own Worst Fear

Silicon Valley Is Turning Into Its Own Worst Fear

No AI text summarizing in Python:

Simple library and command line utility for extracting summary from HTML pages or plain texts. sumy

sumy

A chapter from @baldur@toot.cafe’s book, The Intelligence Illusion:

The Intelligence Illusion

It has helped the blind and partially-sighted access places and media they could not before. A genuine technological miracle.
It lets our photo apps automatically find all the pictures of Grandpa using facial recognition.
It has become one of the basic building blocks of an authoritarian police state, given multinational corporations the surveillance power that previously only existed in dystopian nightmares, and extended pervasive digital surveillance into our physical lives, making all of our lives less free and less safe.
One of these benefits is not like the other. – The Elegiac Hindsight of Intelligent Machine

The Elegiac Hindsight of Intelligent Machine

AI is adversarial:

The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk. – The Expanding Dark Forest and Generative AI: Proving you're a human on a web flooded with generative AI content, by Maggie Appleton

The Expanding Dark Forest and Generative AI: Proving you're a human on a web flooded with generative AI content

And climate breakdown:

ChatGPT and other AI applications such as Midjourney have pushed "Artificial Intelligence" high on the hype cycle. In this article, I want to focus specifically on the energy cost of training and using applications like ChatGPT, what their widespread adoption could mean for global CO₂ emissions, and what we could do to limit these emissions. – The climate cost of the AI revolution, by @wim_v12e@scholar.social

The climate cost of the AI revolution

@drahardja@sfba.social writes that spammers are creating garbage English language content using large language models (LLMs) and then automatically translating it into multiple languages, linking to the following:

… content on the web is often translated into many languages, and the low quality of these multi-way translations indicates they were likely created using Machine Translation (MT). Multi-way parallel, machine generated content not only dominates the translations in lower resource languages; it also constitutes a large fraction of the total web content in those languages. We also find evidence of a selection bias in the type of content which is translated into many languages, consistent with low quality English content being translated en masse into many lower resource languages, via MT. Our work raises serious concerns about training models such as multilingual large language models on both monolingual and bilingual data scraped from the web.
A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism

A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism

Meaning:

The AI destroys the link between the creation and the human mind on the other end, and adds very little meaning of its own. … When people … share an AI-generated creation with me expecting me to engage with the “meaning” of the piece – I feel similarly to how I’d feel if somebody wanted me to treat a dead person like a live one. That thing they’re shoving in my face might have the surface form of something that matters, but it no more contains meaning than a corpse contains the essence of a person. And I find it gross and disturbing to be asked to act as if I believe otherwise. – The work of creation in the age of AI by Andrew Perfors

The work of creation in the age of AI

@emilymbender@mastodon.social writes:

Just because you've identified a problem (here, lack of public financial support for higher ed) doesn't mean an LLM is the solution. – Doing their hype for them

Doing their hype for them

It's not artificial intelligence that's killing people, it's human stupidity:

According to six Israeli intelligence officers, who have all served in the army during the current war on the Gaza Strip and had first-hand involvement with the use of AI to generate targets for assassination, Lavender has played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war. In fact, according to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine “as if it were a human decision.” – ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza, by Yuval Abraham for +927 Magazine

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

@Seirdy@pleroma.envs.net tells it how it is:

Some topics get written about more than others. Our society disproportionately incentivizes generic, far-reaching, easy-to-create, and profitable content. I don’t think it’s currently possible to source nontrivial training data without biases. More importantly: I’m skeptical that such an impossibly comprehensive data set would eliminate the conflations I described in this article. Tripping over bias to fall into a lucid lie is one of a range of symptoms of an inability to actually think. – MDN’s AI Help and lucid lies, Seirdy

MDN’s AI Help and lucid lies

How would one opt out?

Notably, while the worldwide copyright regime is explicitly opt-in (i.e., you have to explicitly offer a license for someone to legally use your material, unless fair use applies), the European legislation changes this to opt-out for AI. Given that, offering content owners a genuine opportunity to do so is important, in my opinion. – Considerations for AI Opt-Out, by Mark Nottingham

Considerations for AI Opt-Out

User agents:

A List of Known AI Agents on the Internet … Protect your website from unwanted AI agent access. Generate your robots.txt automatically using the free API … By signing up, you'll also get notified when new agents are added. – Dark Visitors

Dark Visitors

General Data Protection Regulation (GDPR):

In the EU, the GDPR requires that information about individuals is accurate and that they have full access to the information stored, as well as information about the source. Surprisingly, however, OpenAI openly admits that it is unable to correct incorrect information on ChatGPT. Furthermore, the company cannot say where the data comes from or what data ChatGPT stores about individual people. The company is well aware of this problem, but doesn’t seem to care. Instead, OpenAI simply argues that “factual accuracy in large language models remains an area of active research”. Therefore, noyb today filed a complaint against OpenAI with the Austrian DPA. – ChatGPT provides false information about people, and OpenAI can’t correct it

ChatGPT provides false information about people, and OpenAI can’t correct it

AI deceives us, specifically:

The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable. – "Humans in the loop" must detect the hardest-to-spot errors, at superhuman speed, by Cory Doctorow

"Humans in the loop" must detect the hardest-to-spot errors, at superhuman speed

Another institution is falling:

Stack Overflow, a legendary internet forum for programmers and developers, is coming under heavy fire from its users after it announced it was partnering with OpenAI to scrub the site's forum posts to train ChatGPT. Many users are removing or editing their questions and answers to prevent them from being used to train AI — decisions which have been punished with bans from the site's moderators. – Stack Overflow bans users en masse for rebelling against OpenAI partnership — users banned for deleting answers to prevent them being used to train ChatGPT, by Dallin Grimm, on Tom's Hardware

Stack Overflow bans users en masse for rebelling against OpenAI partnership — users banned for deleting answers to prevent them being used to train ChatGPT

I learned about this when @ben@m.benui.ca wrote:

wrote

Stack Overflow announced that they are partnering with OpenAI, so I tried to delete my highest-rated answers.
Stack Overflow does not let you delete questions that have accepted answers and many upvotes because it would remove knowledge from the community.
So instead I changed my highest-rated answers to a protest message.
Within an hour mods had changed the questions back and suspended my account for 7 days.

@mcc@mastodon.social recently wrote:

wrote

Like, heck, how am I *supposed* to rely on my code getting preserved after I lose interest, I die, BitBucket deletes every bit of Mercurial-hosted content it ever hosted, etc? Am I supposed to rely on *Microsoft* to responsibly preserve my work? Holy crud no.
We *want* people to want their code widely mirrored and distributed. That was the reason for the licenses. That was the social contract. But if machine learning means the social contract is dead, why would people want their code mirrored?

Neurobiology:

Based on a brain tissue sample that had been surgically removed from a person, the map represents a cubic millimeter of brain—an area about half the size of a grain of rice. But even that tiny segment is overflowing with 1.4 million gigabytes of information—containing about 57,000 cells, 230 millimeters of blood vessels and 150 million synapses, the connections between neurons. – Scientists Imaged and Mapped a Tiny Piece of Human Brain. Here’s What They Found, by Will Sullivan, for the Smithsonian Magazine

Scientists Imaged and Mapped a Tiny Piece of Human Brain. Here’s What They Found

Based on:

To fully understand how the human brain works, knowledge of its structure at high resolution is needed. Presented here is a computationally intensive reconstruction of the ultrastructure of a cubic millimeter of human temporal cortex that was surgically removed to gain access to an underlying epileptic focus. It contains about 57,000 cells, about 230 millimeters of blood vessels, and about 150 million synapses and comprises 1.4 petabytes. – A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution, by Alexander Shapson-Coe *et al*, in Science

A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution

Even Bruce Schneier admits it:

In just a few decades, much of human knowledge has been collectively written up and made available to anyone with an internet connection. But all of this is coming to an end. The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences. – The Rise of Large-Language-Model Optimization

The Rise of Large-Language-Model Optimization

And Reddit:

Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts. – OpenAI will use Reddit posts to train ChatGPT under new deal, by Scharon Harding, for Ars Technica

OpenAI will use Reddit posts to train ChatGPT under new deal

Answers? @wim_v12e@scholar.social links this article at CHI '24:

Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style. However, they also overlooked the misinformation in the ChatGPT answers 39% of the time. – An Empirical Study of the Characteristics of ChatGPT Answers to Stack Overflow Questions

An Empirical Study of the Characteristics of ChatGPT Answers to Stack Overflow Questions

Record keeping with Windows 11 Recall:

This database file has a record of everything you’ve ever viewed on your PC in plain text. OCR is a process of looking an image, and extracting the letters. – Stealing everything you’ve ever typed or viewed on your own Windows PC is now possible with two lines of code — inside the Copilot+ Recall disaster.

Stealing everything you’ve ever typed or viewed on your own Windows PC is now possible with two lines of code — inside the Copilot+ Recall disaster.

@mjg59@nondeterministic.computer adds:

The "Recall can't record DRMed video content" thing is because DRMed video content is entirely invisible to the OS. The OS passes the encrypted content to your GPU and tells it where to draw it, and the GPU decrypts it and displays it there. It's not a policy decision on the Recall side, it's just how computers work.

@wim_v12e@scholar.social writes:

Even with my most optimistic estimate, they would account for close to 10% of the world’s 2040 carbon budget. OpenAI’s plans would make emissions from ICT grow steeply at a time when we simply can’t afford *any* rise in emissions. This projected growth will make it incredible hard to reduce global emissions to a sustainable level by 2040.
In the worst case, the embodied emissions of the chips needed for AI compute could already exceed the world’s 2040 carbon budget. Running the computations would make the situation even worse. AI on its own could be responsible for pushing the world into catastrophic warming.
– The insatiable hunger of (Open)AI

The insatiable hunger of (Open)AI

Investors are the problem:

Opportunities like this happens once in 5-10 years when “the next big thing” are in the radar. Idea behind this investments is to bullshit it’s way to the Series B or IPO where original investors can exit. It is not about usefulness but about using momentum of the situation to extract money. – How is it possible that we see such incredible investments in LLMs?

How is it possible that we see such incredible investments in LLMs?

Bullshit:

In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit. – ChatGPT is bullshit

ChatGPT is bullshit

Fediverse:

A recent investigation by Liaizon Wakest revealed that Maven, a new social network founded by former OpenAI Team Lead Ken Stanley, has been importing a vast amount of statuses from Mastodon without anyone’s consent. – Maven Imported 1.12 Million Fediverse Posts, by Sean Tilley
I can’t emphasize enough how much I would love if all the data centers containing the code running these things, across every network, just suddenly exploded. Take it all back to zero, and then put up a digital wall, like in Cyberpunk 2077 when they built a whole new internet that isn’t infested with garbage. – Hey It’s Maven! Who’s Maven?, by @cmdr_nova@cmdr-nova.online

Maven Imported 1.12 Million Fediverse Posts

Sean Tilley

Hey It’s Maven! Who’s Maven?

Building an automated prejudice machine:

Retorio’s AI was trained using videos of more than 12,000 people of different ages, gender and ethnic backgrounds, according to the company. An additional 2,500 people rated how they perceived them in terms of the personality dimensions based on the Big Five model. According to the the start-up the AI‘s assessments have an accuracy of 90 percent compared to those of a group of human observers." – Objective or biased: On the questionable use of Artificial Intelligence for job applications (2021), by Elisa Harlan, Oliver Schnuck and many more, for Bayerischer Rundfunk

Objective or biased: On the questionable use of Artificial Intelligence for job applications

A rant of the finest sort, by @ludicity@mastodon.sprawl.club:

*Look at us*, resplendent in our pauper's robes, stitched from corpulent greed and breathless credulity, spending half of the planet's engineering efforts to add chatbot support to every application under the sun when half of the industry hasn't worked out how to test database backups regularly. – I Will Fucking Piledrive You If You Mention AI Again

I Will Fucking Piledrive You If You Mention AI Again

Maybe it's not just AI but the cloud in general?

That chart shows worldwide data center energy usage growing at a remarkably steady pace from about 100 TWh in 2012 to around 350 TWh in 2024. The vast majority of that energy usage growth came before 2022, when the launch of tools like Dall-E and ChatGPT largely set off the industry's current mania for generative AI. If you squint at Bloomberg's graph, you can almost see the growth in energy usage slowing down a bit since that momentous year for generative AI. – Taking a closer look at AI’s supposed energy apocalypse, by Kyle Orland, for Ars Technica

Taking a closer look at AI’s supposed energy apocalypse

Electricity usage use goes up and up: data centers use more electricity than most countries, only 16 nations consume more

Goldman Sachs (fuckers all, never forget):

The promise of generative AI technology to transform companies, industries, and societies is leading tech giants and beyond to spend an estimated ~$1tn on capex in coming years, including significant investments in data centers, chips, other AI infrastructure, and the power grid. But this spending has little to show for it so far. – Gen AI: too much spend, too little benefit?

Gen AI: too much spend, too little benefit?

Crash:

The veteran analyst argued that hallucinations—large language models’ (LLMs) tendency to invent facts, sources, and more—may prove a more intractable problem than initially anticipated, leading AI to have far fewer viable applications. … For investors, particularly those leaning into the AI enthusiasm, Ferguson warned that the excessive tech hype based on questionable promises is very similar to the period before the dot-com crash. – AI is effectively ‘useless’—and it’s created a ‘fake it till you make it’ bubble that could end in disaster, veteran market watcher warns, by Will Daniel, for yahoo! finance

AI is effectively ‘useless’—and it’s created a ‘fake it till you make it’ bubble that could end in disaster, veteran market watcher warns

Investors Are Suddenly Getting Very Concerned That AI Isn't Making Any Serious Money: "We sense that Wall Street is growing increasingly skeptical." – by Victor Tangermann, for Futurism

Investors Are Suddenly Getting Very Concerned That AI Isn't Making Any Serious Money

Destroying the online job market:

Rather than solving the problems raised by employers’ methods, however, the use of automated job-hunting only served to set off an AI arms race that has no obvious conclusion. ZipRecruiter’s quarterly New Hires Survey reported that in Q1 of this year, more than half of all applicants admitted using AI to assist their efforts. Hiring managers, flooded with more applications than ever before, took the next logical step of seeking out AI that can detect submissions forged by AI. Naturally, prospective employees responded by turning to AI that could defeat AI detectors. Employers moved on to AI that can conduct entire interviews. The applicants can cruise past this hurdle by using specialized AI assistants that provide souped-up answers to an interviewer’s questions in real time. Around and around we go, with no end in sight. – Everlasting jobstoppers: How an AI bot-war destroyed the online job market, by Joe Tauke, for Salon

Everlasting jobstoppers: How an AI bot-war destroyed the online job market

Block crawlers from crawling:

By blocking these crawlers, bandwidth for our downloaded files has decreased by 75% (~800GB/day to ~200GB/day). If all this traffic hit our origin servers, it would cost around $50/day, or $1,500/month, along with the increased load on our servers. – AI crawlers need to be more respectful , by Eric Holscher, for Read the Docs

AI crawlers need to be more respectful

@malwaretech@infosec.exchange recently posted about expectations:

The whole AI thing has me endlessly confused. Half the market is crashing because investors didn't see any signs of payoff in the quarterly earnings report, but I'm so lost as to what exactly they were expecting to see. Did they just not pay any attention at all to what these companies were actually doing with AI?
Were they expecting exponential Instagram usage growth as a result of Meta making it so you can have a conversation with the search bar? Or maybe everyone was going to buy 10 new Windows licenses in celebration of Microsoft announcing they want to install AI powered spyware on everyone's computer? Or was Google going to sell more ads by replacing all the search results with Reddit shitposts?

The whole AI thing

@baldur@toot.cafe writes, living in Iceland:

However, datacentres in Iceland are almost exclusively used for "AI" or crypto. You can't buy regular hosting in these centres for love or money. If you buy hosting in Iceland, odds are that the rack is in an office building in Reykjavík somewhere, not a data centre.
And those data centres use more power than Icelandic households combined.
But, instead, the plan is currently to destroy big parts of places like Þjórsárdalur valley, one of the most green and vibrant ecosystems in Iceland.

Language data and slop:

The wordfreq data is a snapshot of language that could be found in various online sources up through 2021. There are several reasons why it will not be updated anymore. Generative AI has polluted the data. I don't think anyone has reliable information about post-2021 language usage by humans. – Why wordfreq will not be updated

Why wordfreq will not be updated

As noted by @baldur@toot.cafe: „feeling productive is not the same as being productive.“ For example:

Many developers say AI coding assistants make them more productive, but a recent study set forth to measure their output and found no significant gains. Use of GitHub Copilot also introduced 41% more bugs, according to the study from Uplevel, a company providing insights from coding and collaboration data. – Devs gaining little (if anything) from AI coding assistants

developers say

recent study

Devs gaining little (if anything) from AI coding assistants

No formal reasoning:

Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question. Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark. Furthermore, we investigate the fragility of mathematical reasoning in these models and demonstrate that their performance significantly deteriorates as the number of clauses in a question increases. We hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data. When we add a single clause that appears relevant to the question, we observe significant performance drops (up to 65%) across all state-of-the-art models, even though the added clause does not contribute to the reasoning chain needed to reach the final answer. – GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models, by
Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, Mehrdad Farajtabar, at Apple

GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

Not wort it:

Either way, it’s clear that Microsoft’s Copilot Pro experiment hasn’t worked out. A $20 monthly subscription on top of the Microsoft 365 Personal or Home subscription was always a big ask, and when I tried the service earlier this year I didn’t think it was worth paying $20 a month for. – Microsoft is bundling its AI-powered Office features into Microsoft 365 subscriptions / Microsoft appears to be giving up on Copilot Pro in favor of bundling AI features into its Microsoft 365 consumer subscriptions., by Tom Warren, for The Verge

Microsoft is bundling its AI-powered Office features into Microsoft 365 subscriptions / Microsoft appears to be giving up on Copilot Pro in favor of bundling AI features into its Microsoft 365 consumer subscriptions.

Waste:

Just last year, a mere 2.6 thousand tons of electronics was discarded from AI-devoted technology. Considering the total amount of e-waste from technology in general is expected to rise by around a third to a whopping 82 million tonnes by 2030, it's clear AI is compounding an already serious problem. – Scientists Predict AI to Generate Millions of Tons of E-Waste, by Russell McLendon for Science Alert, about E-waste challenges of generative artificial intelligence, by Peng Wang, Ling-Yu Zhang, Asaf Tzachor & Wei-Qiang Chen, in Nature Computational Science.

Scientists Predict AI to Generate Millions of Tons of E-Waste

E-waste challenges of generative artificial intelligence

Students:

OpenAI has published “A Student’s Guide to Writing with ChatGPT”. In this article, I review their advice and offer counterpoints, as a university researcher and teacher. After addressing each of OpenAI’s 12 suggestions, I conclude by mentioning the ethical, cognitive and environmental issues that all students should be aware of before deciding to use or not use ChatGPT. – A Student’s Guide to Not Writing with ChatGPT

A Student’s Guide to Not Writing with ChatGPT

AI will cause a stock market crash:

Remember that nobody has yet worked out how to make an actual profit from AI. So what if — God forbid — number stops going up? There’s a plan for that: large data center holders will go public as soon as possible and dump on retail investors, who will be left holding the bag when the bubble deflates. A bursting AI bubble will take down the Nasdaq and large swathes of the tech sector, not to mention systemic levels of losses and possible bank failures. … We think there’s at least a year or two of money left. -- Pumping the AI bubble: a data center funding craze with ‘novel types of debt structures’

Pumping the AI bubble: a data center funding craze with ‘novel types of debt structures’