Alex Reisner studied a dataset used by Meta to train its large language model LLaMA and found that the data included content from pirated books. "The future promised by AI is written with stolen words," Reisner writes in The Atlantic. He adds that...
"Upwards of 170,000 books, the majority published in the past 20 years, are in LLaMA’s training data. ... nonfiction by Michael Pollan, Rebecca Solnit, and Jon Krakauer is being used, as are thrillers by James Patterson and Stephen King and other fiction by George Saunders, Zadie Smith, and Junot Díaz." What's more, LLaMA isn't the only AI this dataset is training.
Legalities of the creation, usage and ownership of the dataset aside, large language models can now be prompted to write in the "voices" of certain authors, and sometimes, they do a pretty good job. How far can their imitation go if continually trained with more data? Will they be good enough to replace those authors entirely? A chilling possibility in an age of tech-enabled deepfakes and identity theft.
Swiftly written books, some produced with AI assistance, published in the wake of a major event are one way to make coin. But when the event is a tragedy, I think it's distasteful, as in the case of a book about the Maui wildfires.
According to Forbes, "The 44-page book, available as an e-book or paperback, claims to have been written by a Dr. Miles Stone, but the about-the-author section on the Amazon page simply reads, 'I'd rather not say,' and no such person seems to exist in the public record, according to a LexisNexis search."
Independent fact-checking organisation Full Fact looked into this and debunked the idea that the fires must have been premeditated because "how else could these books have been published so fast?" Kindle Direct Publishing, hello?
Scammers cobbling material into books is a longtime grift, but with AI, churning out such books is even easier. Even with low page counts and rushed production, a stack of cheaply priced volumes can rack up a tidy sum. And nobody seems to care whether real authors or experts are behind these books. No surprise if "Dr. Miles Stone" doesn't exist – you can't call out a phantom for plagiarism, bad takes, or misinformation.
Perhaps we should care. AI-written how-to books are also flooding the market, and given how prone these models are to making things up, the misinformation can be deadly. Books on foraging – looking for edibles in the wild – have to be well researched because misidentifying species of plants and fungi can be fatal. And what if real authors, especially accredited experts, are named as the writers of such books? AI impersonating humans and trying to terminate people through books? An interesting premise for a sci-fi novel, albeit a horrifying one.
Let's not forget how this avalanche of machine-generated dross drowns out properly researched and published books written by people who care more than the average spammer.
Not everyone is wary of AI. Tech entrepreneur and writer Ajay Chowdhury doesn't seem worried about AI replacing writers, even as he uses it to help him write ... with a little caveat. "The utopia to me is people using AI to enhance their creativity," he tells Sky News. "The side that worries me is if large corporations start to think we don't need creatives any more."
Chowdhury isn't the only writer who's excited about having AI help. Several local authors and publishers seem cool with it. No doubt the technology can be useful. Writers who are disabled would benefit from having an AI-powered assistant, and not just for helping around the house.
However, some businesses have started ditching humans for AI to speed things up, cut costs, or both. AI may never fully replace human creativity and adaptability, but disruptive tech affects lives, and companies chasing the bottom line will do what they can to save a few bucks. Governments, institutions and tech firms can pitch in to arrest the growth of AI, but it's too late to lock the barn doors.
Jamie Canaves at Book Riot thinks the conversation about AI shouldn't be about how good/bad it is or whether it will replace people – a distraction, she believes, from the real questions.
Who is developing and investing in this tech? What do they want to do with it? Do these people care about how it's being used? Do they care about its impact? Because if the makers and funders of these AI models aren't thinking about regulations and limits, somebody has to, or the misuse of this tech will hurt more than it helps.
AI is here and it's not going anywhere. It will be part of our lives whether we want it to be or not. We either adapt or fade away.