Showing all posts about artificial intelligence
AI information summaries eating away at Wikipedia reader base
29 October 2025
Just about every online publisher has experienced a decline in the number of people reading articles and information published on their websites. Search engines presently do such a good job of breaking down the main points of news reports, blog posts, and the like, that seekers of information are seldom reading the material at its source.
Online encyclopedia Wikipedia is no exception, and the fall in visitors stands to threaten what is surely an invaluable resource, along with others such as Encyclopaedia Britannica.
What happens if we follow this shift in the way people obtain information to its absurd, yet logical, conclusion? If websites such as Wikipedia, Britannica, along with news sites, and many, many, others, are forced to close because no one visits them anymore, what is going to feed the search engine AI summaries we’ve become accustomed to?
In short, we’re going to see AI summaries eat the web, and then eat themselves. The onus here is on search engines, AKA answer engines, and whatever other services generate AI summaries, to use them more selectively, and wean information seekers off them.
Is that something anyone can see happening? No, I didn’t think so.
RELATED CONTENT
artificial intelligence, blogs, content production, technology, trends
ChatGPT Atlas browser: the greatest thing since tabs in Firefox
27 October 2025
Tabbed browsing was, says OpenAI CEO Sam Altman, the last significant web browser innovation. Although tabbed browsing didn’t become commonplace until around 2002, the idea dates back to 1994, with the arrival of InternetWorks, a browser made by BookLink Technologies. Altman seems to be suggesting browsers have barely changed since the early days of the web.
He made the remark during his introduction to ChatGPT Atlas, OpenAI’s new web browser, last week. His words made people take notice, but Altman doesn’t seem to know his onions. Atlas is not a web browser; it is an AI-powered aggregator of information, which may, or may not, be accurate.
So far, Atlas is only available on macOS, meaning I’ve not had a chance to try the innovative “browser” out, but certain aspects of its functionality either baffled or alarmed me as I watched the OpenAI video presentation. To make use of Atlas, we are required to type out commands or prompts, in strikingly similar fashion to ChatGPT.
That’s not typically how browsers are used, but as I say, Atlas doesn’t seem like a web browser to me anyway. Of more concern is the way Atlas can, potentially, access files on the local drive of your computer, or, if you allow it, the contents of your email app. AI scrapers, OpenAI’s no doubt among them, have probably been trawling my website for years, but that is content I have made publicly available.
AI bots going through what’s in my email app, and doing whatever with it, including training LLMs, is another matter entirely. But Atlas is an AI browser, so buyer beware: this is no normal web “browser”, if it is even one in the first place. If people want to use it, that’s for them to decide.
What’s more unsettling, though, is regular browsers, such as Firefox, morphing into AI browsers. Mozilla, the maker of Firefox, which I have been using for over twenty years, is not, it seems, introducing a new browser line; instead it is integrating AI features into an existing product.
This is not a good move: we’re all going to end up running clones of Atlas on our devices, whether we like it or not. If Mozilla wants to make an AI-powered browser, fine, but develop a separate product, and let users decide if they want to use it. Leave the original Firefox, whose early predecessor, Phoenix, shipped with those groundbreaking tabs Altman spoke of, back in 2002, as it is.
Somehow I cannot see any of that happening. Firefox is going to become an AI browser whether we like it or not. There is, however, a way to opt out of Firefox’s AI functionality, as New Zealand/Aotearoa blogger fLaMEd Fury has detailed.
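For anyone wanting a head start before reading that post, here is a minimal sketch of the sort of thing involved: switching off Firefox’s AI features through about:config, or a user.js file in the Firefox profile folder. The preference names below are assumptions based on recent Firefox builds, and may well differ between versions, so verify them against fLaMEd Fury’s post and the release you are running.

// A minimal user.js sketch for turning off Firefox AI features.
// Preference names are assumptions drawn from recent builds; check them before relying on this.
user_pref("browser.ml.chat.enabled", false);   // hide the AI chatbot sidebar
user_pref("browser.ml.chat.shortcuts", false); // hide the chatbot shortcut shown on text selection
user_pref("browser.ml.enable", false);         // disable the local machine-learning engine

The same preferences can be flipped directly in about:config; putting them in user.js simply reapplies them every time Firefox starts.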
RELATED CONTENT
artificial intelligence, browsers, technology, trends
Authors claim Salesforce used their novels to train AI agents
21 October 2025
American novelists Molly Tanzer and Jennifer Gilmore have launched legal action against Salesforce, accusing the San Francisco-based software company of copyright infringement.
Tanzer and Gilmore allege Salesforce used thousands of novels, not just their work, without permission, to train AI agents.
Salesforce want to have their cake and eat it as well. After replacing several thousand workers with AI technologies, presumably saving the company large sums of money, Salesforce want to pay as little as possible to develop the AI agents that displaced the workers in the first place.
What part of any of this is reasonable?
RELATED CONTENT
artificial intelligence, books, copyright, novels, technology
Robotic self-driving vehicles a threat to gig-economy food delivery work
9 October 2025
Robocart, a US company, has been developing self-driving vehicles that have the capacity to deliver ten different customer orders in a single run. The service, which the company plans to launch in Austin, Texas, later this year, will see customers pay just three dollars per delivery, pricing many people will find attractive.
But Chicago-based cybersecurity and network infrastructure expert Nick Espinosa warns that such a service stands to eliminate the roles of many food delivery drivers (YouTube link), working on behalf of companies such as Uber Eats and DoorDash.
Earlier this year, I was hearing stories about Australian web and app developers taking on food delivery work, as AI apps are doing the work they used to, for a fraction of the cost. While many of these people will be able to re-skill and eventually find new work, what will they do in the meantime, if casual work begins drying up?
RELATED CONTENT
artificial intelligence, technology, trends, work
Comment spammers use AI in another assault on bloggers
2 October 2025
When I turned comments back on here a few months ago, after an absence of many years, I was amazed at how quickly spam comments began appearing. Good news travels fast it seems. A new outlet has appeared for us to post our drivel — quick — get over there. But because every comment made here is held back for approval, none of them ever see the light of day.
Of course I wasn’t really surprised at the speed at which the spam arrived. Nor the lack of genuine comments, though there have been a few. I re-enabled comments as a way to centralise my web presence back onto this website. I’m not the biggest fan of social media, centralised or decentralised, but not because I dislike it (well, not too much); rather, social media is just too time consuming.
What did dumbfound me though was the empty-headed nature of the spam comments being left. Some were barely coherent, while others were literally single words made up of random letters. What blogger, in their right mind, is going to approve those sorts of comments? A time-poor blogger, or one not paying attention, I think might be the answer.
These senseless utterances aren’t offensive, so maybe they’ll, you know, just get approved. And with some websites allowing follow-up comments from the same person to be posted without moderation, the floodgates would be open. I suspect, though, few spam-commenters ever saw much of what they wrote approved. Now they have changed tactics, and are using AI to craft their foul fare.
A lot of the recent comment spam I’m seeing reads, judging by the way it is worded, as if the writer has actually read the post they’re responding to. When I posted about Tim Berners-Lee a few days ago, a lot of comments similar to this began appearing:
Oh Timothy, your call to have AI development moved under the auspices of a global not-for-profit is just a little simplistic, don’t you think? Yes, you invented the internet and gave it to us for free, and for that I thank you from the bottom of my heart. But placing AI development in the hands of a non-commercial entity is asking an awful lot.redacted spammy link
At first pass, the comment seems genuine. I too thought Berners-Lee was being optimistic in the extreme by suggesting a global not-for-profit organisation oversee future development of AI, but I’d never call Berners-Lee naive. He knows what he’s saying, and the idea makes sense, though I can’t see it ever happening. But that’s another story.
A commenter though is entitled to their opinion. And it almost seemed like an actual point-of-view, but for the ridiculous inclusion of an embedded spam link. Without, notably, a space after the previous sentence. The writer seems switched-on, but their oddly deficient syntax betrays them. And then the question: why on earth embed a spam link within the comment?
Did they not see the field on the comment form that allows a URL to be included? It’s possible I might have missed the spam-link if they did that. Usually though, I look closely at the URL of a commenter’s website. But then going on to post numerous, slightly differently worded, variations of the same comment, from the same IP no less, somewhat gives the show away.
Even writing this article is helping train the AI spam-commenters, no doubt. What bloggers who allow comments are facing, though, are somewhat more sophisticated spammers, who are using AI to compose comment spam that looks like the real deal.
And yes, I look forward to seeing the thoughts of the AI spam-commenters in response to this post.
RELATED CONTENT
artificial intelligence, blogs, technology, trends
Tim Berners-Lee: the web needs to return to its roots
30 September 2025
Tim Berners-Lee, inventor of the world wide web, writing for The Guardian:
I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late.
Berners-Lee also calls for AI research and development to be facilitated by a not-for-profit body, along the lines of CERN, the international organisation where Berners-Lee created the web.
RELATED CONTENT
artificial intelligence, technology, Tim Berners-Lee
Microsoft to pay some publishers for content used by AI agents
27 September 2025
David Uzondu, writing for Neowin:
Microsoft is reportedly discussing with select US publishers a pilot program for its so-called Publisher Content Marketplace, a system that pays publishers for their content when it gets used by AI products, starting with its own Copilot assistant.
It’s a step in the right direction, but a lot hangs on the word select. The suggestion here is the majority of publishers, particularly smaller ones, including bloggers, will be excluded. Even if their content has been scraped, and is being used in AI products.
RELATED CONTENT
artificial intelligence, publishing, technology
Answer engines: a new challenge for content writers, bloggers
25 September 2025
The biggest year-on-year declines were at Forbes (down 53% to 85.5 million visits — the steepest decline year on year for the second month in a row), Huffington Post (down 45% to 41.3 million), Business Insider (down 44% to 66.6 million), and News 18 (down 42% to 146.3 million). The Independent, CBC and Washington Post also closely followed with drops of 41% in year on year site visits.
Nearly all of the world’s top fifty English-language websites have experienced declines in traffic, to a greater or lesser degree, in the last twelve months. Only one, Substack, has bucked the trend, but I’m not sure that’s good news. And the reason for the sometimes sharp falls in visitors? AI overviews generated by many of the search engines, that’s what.
People searching for information online are increasingly satisfied with the AI-generated summaries that appear as the first “result” in response to a question they have. These overviews are created by drawing on webpages carrying relevant information, and spare search engine users the need to visit said webpages.
It’s great for those looking for a quick answer to a query, provided of course the overview is accurate. It’s not so good for the people who wrote the articles, or blog posts, that feed the AI-generated overviews, as they no longer see a visit to their website. But this is the future of online search. Instead of search engines, we will be using answer engines to source information.
In short, answer engine results will be similar to the AI overviews we see at present. Everything a searcher needs to know will be displayed in the result. There will be no need to visit individual webpages again.
From a content writer’s perspective, it can only be hoped answer engines will cite the sources used to concoct their response to a query. That is, for however many people still wish to verify the information the answer engine provides.
But not everyone writes content to be indexed by a search engine, and many actively prevent their websites from being crawled by the search engines. I get the feeling opting out may not be so simple with answer engines though. Writers and bloggers are all too aware of AI scraper bots plundering their content, whether they like it or not, to train AI agents.
But going forward, this might be something content writers have to expect, accept even, if they want their work to be recognised. We can all see where this is going. The end of SEO, and the advent of — I don’t know — AEO, being Answer Engine Optimisation. Those wanting their content to be found by the answer engines are going to need to figure out how to optimise it accordingly.
No doubt help will be at hand though. AEO experts and gurus will surely be among us soon, if they are not already. But that’s enough good news from me for one day.
RELATED CONTENT
artificial intelligence, blogs, content production, technology, trends
Online freelance marketplace Fiverr aspires to be an ‘AI-first’ company
20 September 2025
Fiverr plans to lay off one third of its workforce in a bid to become an AI-first enterprise, says CEO Micha Kaufman. By swapping out people for AI technologies, the company will become leaner and faster, according to Kaufman. Time will tell.
As of late last year, Fiverr employed some seven hundred and sixty people, meaning about two hundred and fifty jobs are on the line. Kaufman flagged the move earlier this year, when he warned AI was coming for everyone’s jobs, including his.
RELATED CONTENT
artificial intelligence, technology, trends
Death by a thousand cuts: the AI scraper indexing one blog post at a time
16 September 2025
Like many online publishers and bloggers, I’ve experienced significant surges of traffic caused by AI bots indexing — or whatever they do — thousands of pages at a time on my website.
I’m in two minds as to whether or not to block this activity, but it seems pointless as many crawlers disregard disallow requests. Besides, I can’t stop other entities, human or otherwise, accessing the content here, and doing what they will with it.
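For what it’s worth, the disallow requests in question live in a site’s robots.txt file. A minimal sketch of the kind of rules involved appears below; the user-agent tokens are ones the better-known AI operators publicly document, though, as I say, honouring them is entirely voluntary on the crawler’s part.

# Ask the better-known AI crawlers to stay away from the whole site.
# Compliance is voluntary; badly behaved bots simply ignore this file.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: CCBot
User-agent: Google-Extended
Disallow: /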
Once, way back in 2000, someone in New Zealand copied the entirety of the then disassociated website, republished it under the name disenfranchised or something, and called it their own work. I didn’t discover the reproduction by chance though. The responsible party emailed to tell me about it.
I wrote back (effectively) saying they should design their own website. disenfranchised, or whatever it was, vanished a few weeks later. I think they hoped I would write ceaselessly about the “rip-off” of my work, but when I said no more, they found something else to do.
I know there are ways to make copying the contents of a website difficult, but anyone sufficiently motivated will figure out how to bypass those mechanisms.
At least someone liked what I did enough to want to copy it. I highly doubt though any crawlers gathering data for AI agents care whether what I do here is likeable or not. But what annoys me is the way the activities of this scraper are distorting my web analytics (not Google) data.
Yes, you can help yourself to the content here, just don’t mess with my web stats.
Of course, I know web analytics are by no means an exact science, but they do highlight trends. Somehow my morning online routine would not be the same if I decided to ditch analytics. Besides, my stats app holds near on twenty years’ worth of data, so there is also the history aspect.
To complicate matters, the scraper uses a different IP address on every single visit, meaning I can’t simply add an ignore tag to one IP, or a range, to keep visits off the analytics app data.
Consequently, their visits appear to originate from a different town or city, but in the same country (a populous nation in East Asia). There is also no rhyme or reason to the maybe twenty to thirty pages they visit daily. One minute it is a years-old post, the next something far more recent.
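If the crawler at least sends a consistent user-agent string (a big if, and purely an assumption on my part), another option is to filter at the web server rather than by IP, so the visits never reach the stats in the first place. A rough sketch of the idea, assuming nginx and a stats app that reads the access log, neither of which may apply here:

# Assumes the bot identifies itself in its user-agent, and that the
# stats are derived from this access log; both are assumptions.
map $http_user_agent $log_visit {
    default 1;
    ~*(GPTBot|ClaudeBot|CCBot) 0;  # extend the pattern with whatever the bot calls itself
}

server {
    # ... the rest of the existing server configuration ...
    access_log /var/log/nginx/access.log combined if=$log_visit;
}

The map block lives in the http context of the nginx configuration; the access_log directive’s if= parameter skips logging whenever the variable evaluates to zero.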
As the crawler did not snatch up several thousand posts in one fell swoop, it will doubtless be active for some time to come. In the meantime I’ll make the most of thinking my website is ever so slightly more popular than usual, since there’s not much else to do.
RELATED CONTENT
