AI in publishing

Unsurprisingly, AI was a top theme at this year’s London Book Fair, with several talks on the subject.

There is an understandable nervousness around the development of large language models (LLMs) and artificial intelligence; a fear that they will replace the need for creatives, artists, designers and writers.

There’s also anger at the perceived “theft” of copy which is scraped from the internet, leaving writers and creatives without remuneration for the use of their copyrighted text.

I attended two talks on the subject at the book fair, and here report some of the comments and observations made.

Copyright and AI: A Global Discussion of Machines, Humans, and the Law

Chaired by Porter Anderson, editor-in-chief of Publishing Perspectives; with Glenn Rollans, president and publisher, Brush Education Inc; Nicola Solomon, CEO, the Society of Authors; Maria A Pallante, president & CEO, the Association of American Publishers; and Dan Conway, CEO, The Publishers Association UK

Dan Conway

We need to see fair remuneration, author crediting, and publishers able to withhold rights for AI to use authors’ works.

Copyright and AI have to work together. Copyright is a prerequisite for AI – there has to be acknowledgment that there is copyrighted material and it has to be paid for.

We’re seeing an impasse in parliament [between different departments] about what we need to do.


Porter Anderson

Everyone can see the danger of AI but is also interested in what it can do for us. There’s a tension inherent in the whole discussion.

Courts in Canada siding with AI are saying companies don't need to pay copyright fees to creators.

We have to get past the fears and work with them, engage with them and make it work the right way.

[Porter did not give any source for the comment on Canadian court decisions in AI, but I include this link which has details of the Canadian Government’s “Consultation on Copyright in the Age of Generative Artificial Intelligence”. There are many instances of the suggestion that using copyrighted material in text and data mining to train AI models can be considered “fair use”. Hmmm.]


Maria A Pallante

AI [organisations] agree governments need to regulate them but won’t volunteer how. Copyright drives progress by protecting creators. It drives technological progress.

We have more than two dozen copyright and AI cases pending in the US, including the music publishers' case against Anthropic in Nashville.

Big tech claims it’s using AI in the public interest.

AI companies are willing to spend millions on the tech but not on the authors providing the content.

They’re so innovative but they can’t create a digital licensing scheme?

[Maria recommended a book on the subject, Our Final Invention by James Barrat. She is also an adviser to the non-profit Fairly Trained, which certifies generative AI models that don't train on copyrighted work.]


Nicola Solomon

Who uses the machines, who benefits from the use, and who gets the money?

How do we discover human works when they're drowned out by machine-made material?

Work needs to be labelled if it has been AI generated. But Amazon is not providing a filter to screen out AI-generated material.

If there are found to be benefits from our created work, we need to be making money from it, if only to keep money in local economies.

We're very happy to work with licensing to provide the works that machines need to create AI material.

We want to engage but it’s big tech that says licensing is an obstacle.


Glenn Rollans

Publishing is notoriously hidebound. When ChatGPT launched, we were suddenly awakened to an idea, or a threat, that had been in the works since 2015.

We missed how aggressively new tech would move into markets. It's all moving at a breakneck pace.

We’re in the midst of aggressive change in the information marketplace.

Products are being built on our products without attribution, remuneration or transparency. There’s an injustice there.

Transforming the Future of the Written Word with Artificial Intelligence (AI)

Chaired by Kat Brown, journalist & author; with Rebecka Isaksson, knowledge strategist, KnowFlow Value; Katie King, author & CEO, AI In Business; Dr Kate Devlin, reader in artificial intelligence & society, King's College London; and Bill Thompson, principal research engineer in the advisory team, BBC Research & Development

Kate Devlin

AI has been called the end of humanity and the discourse is doom-laden, but that's not true.

AI is large language technology built on copyrighted material, without compensation.

AI is not designed to produce facts. They [users] expect the content to be factually accurate but they’re [LLMs] not designed to be verifiable and true, they’re designed to be convincing.

We always need to evaluate the source.

We're already using AI – Google Maps, for example.

Government is favouring a small number of tech bros who are contributing less to the economy than creatives do. They're throwing us under the bus.

Who thought copyright law would be the next big thing in 2024?

We don’t have to be fatalist; we can centre humans and human values. We should be putting humans at the centre.

I'm not convinced that AI can be inclusive or give voice to the marginalised.


Katie King

Bloggers and journalists can also say untrue things.

Who owns content – the writer or the AI model that turns it into something different? Can AI own content?

Can computer systems hold patents? Legal jurisdictions in different territories will apply the law differently.

Can AI developers be content owners?

We need to use these tools to help us, to make things diverse and inclusive, but the IP and patents have to be dealt with.

We use Otter to upload transcripts, plus Firefly, Phrases, Jasper, Midjourney… many, many tools to help with analysis. The issue is around privacy. We need to get something back if we're giving away details about ourselves.

We have to learn how to use the tools and keep pushing back on the tools.

Will people want it? There'll be people buying these products, and publishers will cope without using these tools, but there are huge productivity gains to be made.

AI is augmented intelligence. When we have data we have insights, and that should be used to our advantage.


Rebecka Isaksson

AI voices in audio will avoid the biases of voices being associated with certain humans. [This was a standout point for me and one I hadn’t thought of. When you listen to certain voices or accents, you as a listener come to the material with preconceived ideas and biases. An AI voice is… neutral, and so, potentially, is the listener.]

ChatGPT is the biggest and best marketing scam. It has created a massive demand from the market and that’s all it’s done. I use OpenAI as a collaborative partner to challenge my own truths. I asked Copilot to read my abstract as though it were a 26-year-old developer – what would its perspective be? I use it to challenge myself. It helps me learn and grow.

People ask, “Is AI going to make us dumber?”

Authors are trying to write books using AI and they're going to fail miserably. When we have these unbiased, neutral [voices] with no nuance or emotion, there won't be the imperfections that stimulate as you read. They'll be flat, boring and self-sanitising. There'll be a bump in the road [getting used to the new technology] and then it'll disappear.

The technology enables an up-and-coming, unknown author to translate an excerpt quickly and send it out to multiple publishers. There are ways it can help new voices.


Bill Thompson

There is a breadth of tech under AI that we’re using all the time. It’s becoming part of the computational infrastructure of our lives.

The conscious decision to connect to the internet feels ridiculous now. Dial-up modems!

These things are being oversold. They are not godlike or omnipotent. There will always have to be a sensible human in the loop.

My Facebook timeline is the biggest failure in AI, which should give us pause, and hope. I don't see my friends' posts; I see ads for things I don't want.

The machine will always be there, and it won't get tired of giving you feedback on your chapters and character development.

I’m interested to see how AI changes how people write.

Writers won’t be changed but publishing will be transformed. It’s going to be chaos. The way you market and distribute will be shaken up.

We might enable the Netflix-ication of creating new books. An AI algorithm will tell publishers what books will sell.

My take on AI

Personally, I’m very much sitting on the fence when it comes to AI. There is always fear around new tech, originating with the Luddites in the 19th century. As Steven Bartlett said (also at the book fair), our instinct, when we are faced with the “bizarre”, is to turn away and refuse to accept it. We see it as a threat and a danger.

But, Steven said: “You have to lean in when things feel bizarre and turn threats into opportunities.”

AI seems a threat but there is huge opportunity in its use as a tool. It is a tool that can and should be used to speed up processes and ease labour loads.

But, as with all new tools developed by humankind, there will be a bedding-in period when we're so giddy with excitement at this novel development that we overlook its flaws and misuse it, failing to look past its ideals to see the harm it could also be causing.

If the tech giants behind AI actually sit down with representatives of creatives to hammer out the positive and negative impacts, they could devise some sort of agreement that balances out the financial impact for everyone.

At the moment, it all looks too much like a dark art, and there are too many messages of fear clouding the full truth of AI’s potential. These clouds aren’t being cleared by the AI giants themselves.

There needs to be a transparent and inclusive conversation between all stakeholders, to inform, educate, compromise and please all parties.

As a creative myself, I’m furious at the very thought that works would be scraped from the internet to train AI without the creators being remunerated. The western world is based on capitalism and, in a capitalist society, this sort of activity is theft, pure and simple.

*But* – is it that simple? Are sentences, passages and paragraphs literally being plagiarised? Or are the LLMs just learning from patterns? I mean, all writers learn from patterns, don't they? The haiku. The sonnet. Freytag's Pyramid. The "save the cat" beat sheet. They're all frameworks we use to build our own creative works around. If LLMs are doing that, I don't see the harm. But we don't know if that's all that's happening.

Artists are certainly seeing their work lifted and shared without credit or payment. It’s reasonable to be annoyed by this. It’s theft and it’s also fraudulent.

I'd like to see the AI giants call a halt to technological developments – or at least slow them down – while we all catch up morally. Sometimes I think that's the biggest problem with new tech: it's launched into the marketplace, and it takes far longer for our complex human minds to catch up. There are philosophical and moral questions to be asked; a debate to be had on the financial implications for every stakeholder. There are boxes to be ticked and consultations to complete.

And there are certainly far more honest and transparent conversations to be had between AI developers and governments. Creatives shouldn’t be thrown under the bus all for the sake of the emperor’s new clothes.

Book links take you to bookshop.org, where you can buy these books, which will support independent bookshops and earn me a small affiliate commission. Thank you.
