Showing all posts about technology
A loophole for surviving the heat-death of the universe, or a noose?
3 October 2025
The people at Kurzgesagt are pretty clued-up. They must learn a lot, about everything really, in their line of work. As a result of this ceaseless learning, they might have found a way for whatever lifeforms are still around by then to evade the eventual heat-death of the universe.
Although still conjecture, this is how the universe might “end”, trillions of years hence. Long after the last star has stopped shining, and long after the last black hole has finally disintegrated.
Under this scenario, the universe won’t, or isn’t expected to, collapse in on itself. Seemingly the cosmos will continue expanding forever, as a dark, cold void.
This, however, appears to be the ideal environment for eternal life. In short, a civilisation Kurzgesagt calls the Noxans will harvest vast amounts of energy from their galaxy, or what’s left of it. This will be stored in a massive battery bank, which the Noxans will draw on for untold trillions of years.
Untold trillions of years, but not forever. This near eternal life, however, won’t be living as we know it.
The temperature of the universe at this stage will be barely an iota above zero on the Kelvin (K) scale. For reference, water freezes at about two-hundred-and-seventy-three kelvin. Zero kelvin, or absolute zero, will be pretty cold. Too cold to even play ice-hockey.
But the Noxans will not be particularly active. Their digital avatars, which is all that will remain of them, will spend their waking hours engaged only in thought.
They will need to slumber to conserve resources. But this off-time will aid in cooling them down further, in turn reducing their power needs, in turn extending the life of their batteries. Didn’t the Noxans do well, surviving trillions upon trillions of years after the universe’s heat-death?
Kurzgesagt calls their method a loophole, but it seems more like a noose to me.
I’m curious as to what sort of material the battery banks, and whatever structure the Noxans will “reside” in, are made of. How will these endure for eternity without repair or replacement?
But sitting around in an ice-box until the battery goes flat doesn’t seem like fun. There has to be a better way for a civilisation to live forever. And maybe there is.
The Noxans, it should be pointed out, are what’s called a Type III civilisation on the Kardashev scale. This means they’re able to harness all the energy within a galaxy.
In comparison, Type I civilisations control all the energy on their planet, Type II their solar system. Humanity might be considered a zero-point-seven civilisation. But when Nikolai Kardashev, a Soviet astronomer, drew up his scale in 1964, he did not venture beyond Type III.
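That zero-point-seven figure comes from Carl Sagan’s later, interpolated version of the scale, which converts a civilisation’s total power use into a continuous rating. A quick sketch in Python (the power figure for humanity is a rough, commonly quoted estimate):

```python
import math

def kardashev_level(power_watts: float) -> float:
    """Carl Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, where P is power in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity currently uses very roughly 2e13 W, which lands
# at about 0.73 on the interpolated scale.
print(round(kardashev_level(2e13), 2))  # 0.73
```

On this version of the scale, Type I sits at ten-to-the-sixteen watts, with each whole step up requiring ten billion times more power than the last.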
Other people though, including Hungarian academic Zoltán Galántai, speculate that Type IV, and even Type V, civilisations may be possible.
A Type IV civilisation would have all the energy of the universe at its disposal. Type V entities, meanwhile, could probably create a whole new universe in which to live. This seems like a better plan for the Noxans. If they’ve made it as far up the scale as III, they could push on higher.
Reaching the ultimate top level, in this case V, would be a challenge, as I’m sure any gamer could tell you. But if the Noxans start now, with potentially many, many, trillions of years in front of them, I’m sure they could do it.
Eventually freezing to death in a glorified refrigerator seems like an absurd idea in comparison.
Comment spammers use AI in another assault on bloggers
2 October 2025
When I turned comments back on here a few months ago, after an absence of many years, I was amazed at how quickly spam comments began appearing. Good news travels fast it seems. A new outlet has appeared for us to post our drivel — quick — get over there. But because every comment made here is held back for approval, none of them ever see the light of day.
Of course I wasn’t really surprised at the speed at which the spam arrived. Nor the lack of genuine comments, though there have been a few. I re-enabled comments as a way to centralise my web presence back onto this website. I’m not the biggest fan of social media, centralised or decentralised, but not because I dislike it (well, not too much); rather, social media is just too time consuming.
What did dumbfound me though was the empty-headed nature of the spam comments being left. Some were barely coherent, while others were literally single words made up of random letters. What blogger, in their right mind, is going to approve those sorts of comments? A time-poor blogger, or one not paying attention, I think might be the answer.
These senseless utterances aren’t offensive, so maybe they’ll, you know, just get approved. And with some websites allowing follow-up comments from the same person to be posted without moderation, the floodgates would be open. I suspect few spam-commenters ever saw much of what they wrote approved, though. Now they have changed tactics, and are using AI to craft their foul fare.
A lot of the recent comment spam I’m seeing looks, by the way it is worded, as if the writer has read the post they’re responding to. When I posted about Tim Berners-Lee a few days ago, a lot of comments similar to this began appearing:
Oh Timothy, your call to have AI development moved under the auspices of a global not-for-profit is just a little simplistic, don’t you think? Yes, you invented the internet and gave it to us for free, and for that I thank you from the bottom of my heart. But placing AI development in the hands of a non-commercial entity is asking an awful lot.redacted spammy link
At first pass, the comment seems genuine. I too thought Berners-Lee was being optimistic in the extreme by suggesting a global not-for-profit organisation oversee future development of AI, but I’d never call Berners-Lee naive. He knows what he’s saying, and the idea makes sense, though I can’t see it ever happening. But that’s another story.
A commenter though is entitled to their opinion. And it almost seemed like an actual point-of-view, but for the ridiculous inclusion of an embedded spam link. Without, notably, a space after the previous sentence. The writer seems switched-on, but their oddly deficient syntax betrays them. And then the question: why on earth embed a spam link within the comment?
Did they not see the field on the comment form that allows a URL to be included? It’s possible I might have missed the spam-link if they did that. Usually though, I look closely at the URL of a commenter’s website. But then going on to post numerous, slightly differently worded, variations of the same comment, from the same IP no less, somewhat gives the show away.
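The tells described here could, in principle, be reduced to a couple of crude moderation heuristics. This is only a hypothetical sketch, not any real spam filter; the function name and the 0.8 overlap threshold are invented for illustration:

```python
import re

def looks_like_spam(comment: str, prior_comments: list[str]) -> bool:
    """Crude heuristics for the patterns described above: a URL jammed
    against the end of a sentence with no space, or a near-duplicate
    of a comment already received."""
    # A link fused to the preceding sentence, e.g. "...an awful lot.http://spam"
    if re.search(r"[a-z]\.https?://", comment):
        return True
    # Near-duplicate check: high word overlap with an earlier comment
    words = set(comment.lower().split())
    for prior in prior_comments:
        prior_words = set(prior.lower().split())
        overlap = len(words & prior_words) / max(len(words | prior_words), 1)
        if overlap > 0.8:
            return True
    return False
```

Real moderation tools lean on far more signals (IP reputation, posting rate, link counts), but even checks this simple would catch the comments described above.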
Even writing this article is probably helping train the AI spam-commenters. What bloggers who allow comments now face are somewhat more sophisticated spammers, who are using AI to compose comment spam that looks like the real deal.
And yes, I look forward to seeing the thoughts of the AI spam-commenters in response to this post.
RELATED CONTENT
artificial intelligence, blogs, technology, trends
Tim Berners-Lee: the web needs to return to its roots
30 September 2025
Tim Berners-Lee, inventor of the world wide web, writing for The Guardian:
I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late.
Berners-Lee also calls for AI research and development to be facilitated by a not-for-profit body, along the lines of CERN, the international organisation where Berners-Lee created the world wide web.
RELATED CONTENT
artificial intelligence, technology, Tim Berners-Lee
Microsoft to pay some publishers for content used by AI agents
27 September 2025
David Uzondu, writing for Neowin:
Microsoft is reportedly discussing with select US publishers a pilot program for its so-called Publisher Content Marketplace, a system that pays publishers for their content when it gets used by AI products, starting with its own Copilot assistant.
It’s a step in the right direction, but a lot hangs on the word select. The suggestion here is that the majority of publishers, particularly smaller ones, including bloggers, will be excluded. Even if their content has been scraped, and is being used in AI products.
RELATED CONTENT
artificial intelligence, publishing, technology
Answer engines: a new challenge for content writers, bloggers
25 September 2025
The biggest year-on-year declines were at Forbes (down 53% to 85.5 million visits — the steepest decline year on year for the second month in a row), Huffington Post (down 45% to 41.3 million), Business Insider (down 44% to 66.6 million), and News 18 (down 42% to 146.3 million). The Independent, CBC and Washington Post also closely followed with drops of 41% in year on year site visits.
Nearly all of the world’s top fifty English-language websites have experienced declines in traffic, to greater or lesser degrees, in the last twelve months. Only one has bucked the trend, Substack, but I’m not sure that’s good news. The reason for the sometimes sharp falls in visitors? AI overviews generated by many of the search engines, that’s what.
People searching for information online are increasingly satisfied with the AI-generated summaries that appear as the first “result” in response to a question they have. These overviews are created by drawing on webpages carrying relevant information, and spare search engine users the need to visit said webpages.
It’s great for those looking for a quick answer to a query, provided of course the overview is accurate. It’s not so good for the people who wrote articles, or blog posts, that feed the AI generated overviews, as they no longer see a visit to their website. But this is the future of online search. Instead of search engines though, we will be using answer engines to source information.
In short, answer engine results will be similar to the AI overviews we see at present. Everything a searcher needs to know will be displayed in the result. There will be no need to visit individual webpages again.
From a content writer’s perspective, it can only be hoped answer engines will cite the sources used to concoct their response to a query. For the sake of however many people still wish to verify the information an answer engine provides, that is.
But not everyone writes content to be indexed by a search engine, and many actively prevent their websites from being looked at by the search engines. I get the feeling this may not be an option with answer engines though. Writers and bloggers are all too aware of AI scraper bots marauding through their content, whether they like it or not, to train AI agents.
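For what it is worth, asking crawlers to stay away is usually done with robots.txt directives, though as noted, compliance is entirely voluntary. A minimal example using two real, documented AI crawler user-agents (GPTBot is OpenAI’s training crawler, CCBot is Common Crawl’s):

```
# robots.txt: politely ask AI training crawlers to skip the whole site
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

A well-behaved crawler reads this file before fetching anything else; the ones bloggers complain about simply ignore it.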
But going forward, this might be something content writers have to expect, accept even, if they want their work to be recognised. We can all see where this is going. The end of SEO, and the advent of — I don’t know — AEO, being Answer Engine Optimisation. Those wanting their content to be found by the answer engines are going to need to figure out how to optimise it accordingly.
No doubt help will be at hand though. AEO experts and gurus will surely be among us soon, if they are not already. But that’s enough good news from me for one day.
RELATED CONTENT
artificial intelligence, blogs, content production, technology, trends
Download your TypePad blog and post the content to a new website
24 September 2025
Phil Gyford, who gave us ooh.directory, has published a method for downloading a TypePad blog, and uploading the files to another server, should you so wish, so your TypePad blog can live on under a new guise. Don’t forget, TypePad closes at the end of September, so you need to act quickly if you want to retrieve your blog.
Subscribe Openly, and (almost) one-click RSS feed subscriptions
23 September 2025
In an ideal world subscribing to a website/blog’s RSS feed should be as simple as following a page on the socials. Simply click the follow button, and that’s it. In the case of, say, Instagram all future posts of whoever you started following will be visible — algorithms permitting — in the main/home feed.
Of course, subscribing to an RSS feed isn’t difficult. If you know what you’re doing. But for those who don’t know much about RSS, clicking the subscribe button might result in confusion and frustration, and see them abandoning the process altogether.
Sometimes clicking the subscribe button might only open the URL of the RSS feed, leaving a budding subscriber wondering what to do next. “Am I meant to bookmark this link?” they might wonder.
But before we ask people to subscribe to an RSS feed, we need them to understand they first need an RSS reader: an app that allows people to subscribe to, and read, RSS feeds. To the uninitiated though, the process of installing an RSS reader might present another confusing hurdle.
Subscribe Openly, however, created by James, is a step in the right direction.
Instead of presenting a would-be RSS subscriber with a screen filled with the raw data of a RSS feed, when they click on the subscribe button, they are presented with a list of RSS readers they can install. Here’s what you’d see if you were subscribing to the RSS feed for my website this way.
Next, prospective RSS subscribers need to understand that setting up a reader app is not that difficult. They doubtless have numerous apps on their device already; an RSS reader would simply be one more. Let’s get to it.
Perhaps though, styling feeds so they’re coherent in a web browser is something publishers who syndicate content to RSS should consider. Having an RSS feed that renders like a webpage — one that could be bookmarked like any other website — does, of course, seem to defeat the purpose of having an RSS feed.
But, if people new to RSS see a coherent looking webpage when clicking the URL of a RSS feed, they might have more incentive to find out more about RSS.
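Styling a feed for the browser is typically done by attaching an XSLT stylesheet to the feed via an xml-stylesheet processing instruction. A bare-bones sketch of an RSS feed set up this way (the stylesheet filename here is only an example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed-style.xsl"?>
<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <link>https://example.com/</link>
    <description>A feed that renders as a readable page in the browser</description>
  </channel>
</rss>
```

The XSL file then transforms the raw feed into ordinary HTML, so visitors clicking the feed URL see a styled page rather than a wall of markup.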
RELATED CONTENT
blogs, content production, RSS, syndication, technology
Online freelance marketplace Fiverr aspires to be an ‘AI-first’ company
20 September 2025
Fiverr plans to lay off one third of its workforce in a bid to become an AI-first enterprise, says CEO Micha Kaufman. By swapping out people for AI technologies, the company will become leaner and faster, according to Kaufman. Time will tell.
As of late last year, Fiverr employed some seven-hundred-and-sixty people, meaning about two-hundred-and-fifty jobs are on the line. Kaufman flagged the move earlier this year, when he warned AI was coming for everyone’s jobs, including his.
RELATED CONTENT
artificial intelligence, technology, trends
Australian social media age verification laws: you might need to prove your age
20 September 2025
The Australian government has issued guidelines regarding proposed age verification regulations that come into effect this December.
While social networks will be required to “detect and deactivate or remove” the accounts of members under the age of sixteen, they will not need to verify the age of every last user. This would no doubt apply to instances where someone has been a long-time user of a social media channel, or it is apparent they are over the age of sixteen.
It sounds reassuring, at least on the surface, but the devil will be in the detail. It will be down to individual platforms to decide how they go about ascertaining a member’s age, rather than there being a standard, universal process they must adhere to. Expect to see some under-sixteens fall through the cracks, while a few over-sixteens get caught in the net.
RELATED CONTENT
Australia, social media, social networks, technology
Death by a thousand cuts: the AI scraper indexing one blog post at a time
16 September 2025
Like many online-publishers/bloggers, I’ve experienced significant surges of traffic caused by AI bots indexing — or whatever they do — thousands of pages at a time on my website.
I’m in two minds as to whether or not to block this activity, but it seems pointless as many crawlers disregard disallow requests. Besides, I can’t stop other entities, human or otherwise, accessing the content here, and doing what they will with it.
Once, way back in 2000, someone in New Zealand copied the entirety of the then disassociated website, republished it under the name disenfranchised or something, and called it their own work. I didn’t discover the reproduction by chance though. The responsible party emailed to tell me about it.
I wrote back (effectively) saying they should design their own website. disenfranchised, or whatever it was, vanished a few weeks later. I think they hoped I would write ceaselessly about the “rip-off” of my work, but when I said no more, they found something else to do.
I know there are ways to make copying the contents of a website difficult, but anyone sufficiently motivated will figure out how to bypass those mechanisms.
At least someone liked what I did enough to want to copy it. I highly doubt though any crawlers gathering data for AI agents care whether what I do here is likeable or not. But what annoys me is the way the activities of this scraper are distorting my web analytics (not Google) data.
Yes, you can help yourself to the content here, just don’t mess with my web stats.
Of course, I know web analytics are by no means an exact science, but they do highlight trends. Somehow my morning online routine would not be the same if I decided to ditch analytics. Besides, my stats app holds nearly twenty years’ worth of data, so there is also the history aspect.
To complicate matters, the scraper uses a different IP address on every single visit, meaning I can’t simply add an ignore tag to one IP, or a range, to keep visits off the analytics app data.
Consequently, their visits appear to originate from a different town or city, but in the same country (a populous nation in east Asia). There is also no rhyme or reason to the maybe twenty to thirty pages they visit daily. One minute it is a years-old post, the next something far more recent.
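One way to at least see the scraper in the data, without relying on IP addresses, is to group hits by day, user-agent, and country, and flag combinations served from an unusually large number of distinct IPs. The hit-record fields below are hypothetical, and would need adapting to a real analytics export:

```python
from collections import defaultdict

def suspected_crawler_days(hits: list[dict], min_unique_ips: int = 15) -> dict:
    """Group hits by (day, user_agent, country) and flag combinations
    where many distinct IPs made requests: the signature of a crawler
    rotating its address on every visit."""
    ips_seen = defaultdict(set)
    for hit in hits:
        key = (hit["day"], hit["user_agent"], hit["country"])
        ips_seen[key].add(hit["ip"])
    return {key: len(addrs) for key, addrs in ips_seen.items()
            if len(addrs) >= min_unique_ips}
```

Whether it is then worth excluding the flagged hits from the stats is another question, given the scraper shows no sign of stopping.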
As the crawler did not snatch up several thousand posts in one fell swoop, it will doubtless be active for some time to come. In the meantime I’ll make the most of thinking my website is ever so slightly more popular than usual, since there’s not much else to do.