Showing all posts about artificial intelligence

The Jony Ive/OpenAI device: a limited-function, screenless smartphone?

29 May 2025

Ming-Chi Kuo, an analyst at Hong Kong-based TF International Securities, spills the tea, perhaps, about the upcoming “futuristic AI device” being designed collaboratively by former Apple CDO Jony Ive and OpenAI.

According to Kuo (X/Twitter link), the device is intended to be worn around the neck. A bit like a lanyard, maybe. It will be slightly larger than the erstwhile Humane AI Pin, and will have cameras and microphones, but no display screen.

The device will, however, connect to smartphones and computers, use their screens, and, by the sounds of things, tap into their computing capabilities as well.

This detail intrigues me. Given the Ive/OpenAI device is intended to be “a product that uses AI to create a computing experience that is less socially disruptive than the iPhone”, doesn’t deriving much, or even some, of its functionality from an iPhone (or other smartphone) defeat the purpose?

Otherwise the device sounds like a lite version of a smartphone, one you could keep on your side table overnight. It can still make and pick up phone calls, act as an alarm clock, and offer information in response to voice prompts.

Things like: “what’s the weather forecast?” or: “what’s making news headlines this morning?” It may possibly be a device that keeps us connected to the outside world, but prevents social media doomscrolling in the middle of the night.

That might be something people will find useful. We’ll have to wait and see what is actually shipped.


AI 2027: an artificial intelligence future that’s only two years away?

28 May 2025

A speculative essay on the (perhaps) faster-than-anticipated rise of a superhuman, superintelligent AI, by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean. It’s a long, possibly unsettling read, but well worth it.

The CEOs of OpenAI, Google DeepMind, and Anthropic have all predicted that AGI will arrive within the next 5 years. Sam Altman has said OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.” What might that look like? We wrote AI 2027 to answer that question. Claims about the future are often frustratingly vague, so we tried to be as concrete and quantitative as possible, even though this means depicting one of many possible futures.

Artificial General Intelligence (AGI) would match the full range of human cognitive abilities, whereas today’s AI performs specific tasks that would otherwise require human intelligence. I’m thinking HAL, the human-like computer in the 1968 film 2001: A Space Odyssey, might be an example of AGI, while ChatGPT and Claude are AI bots.

Some people think AGI will never arrive, but even an almost-superintelligent AI could still be as menacing as some fear:

A week before release, OpenBrain gave Agent-3-mini to a set of external evaluators for safety testing. Preliminary results suggest that it’s extremely dangerous. A third-party evaluator finetunes it on publicly available biological weapons data and sets it to provide detailed instructions for human amateurs designing a bioweapon — it looks to be scarily effective at doing so. If the model weights fell into terrorist hands, the government believes there is a significant chance it could succeed at destroying civilization.

Doesn’t sound like much of a “glorious future” to me.


AI will take the work it wants to do, leave the rest for people

28 May 2025

Is AI going to take work away from people? It’s a question on the minds of many. Dror Poleg argues AI bots will only be interested in certain “high-level” tasks, leaving plenty of work for us:

One might argue that even if we have superhuman software, older software or weaker AI models could still perform trivial tasks cheaply. But this misses the crucial point of opportunity cost: any marginal unit of energy that could tip the scales in finance or warfare would always be too valuable to waste on trivial tasks. As long as energy and computing resources determine competitive outcomes, there will always be something better to do with them than waste them on tasks humans can handle.

The question here, though, is what sort of work will be left for people? Tasks we want to do, or tasks we’re forced to do because we’ll have no other choice?


Book bloopers: when authors’ AI prompts are published in their novels

26 May 2025

Matthew Gault, writing for 404 Media:

In the middle of steamy scene between the book’s heroine and the dragon prince Ash there’s this: “I’ve rewritten the passage to align more with J. Bree’s style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements:”

The excerpt is said to be found in chapter three of Lena McDonald’s novel Darkhollow Academy: Year 2, although apparently it has since been removed from later editions of the book.

If you must use AI, especially for fiction, remember the rules: the first rule of using AI to write a novel is not to get caught using AI.

For those wondering about the reference, J. Bree is a Western Australia-based author of fantasy and dark romance novels. The incident also suggests Bree’s work has been appropriated by AI models, most likely without her prior knowledge or approval.


Jony Ive and Sam Altman announce collaboration in video lovefest

24 May 2025

Jony Ive, former Chief Design Officer at Apple, founded LoveFrom with Australian designer Marc Newson in 2019, when he left Apple. In 2024, Ive established io as a vehicle to move into the AI space.

A few days ago we learned Ive is joining forces with OpenAI CEO Sam Altman, and io will merge with OpenAI. Take the last letter of OpenAI, pair it with the first, and you get io, right? The merger, however, sounds like a tech/design collaboration made in heaven.

No clues have been offered as to what can be expected of this coming together, other than an AI device of some sort. According to a Wired article published last September, it will be “a product that uses AI to create a computing experience that is less socially disruptive than the iPhone”.

If you haven’t seen the video announcing Ive and Altman’s partnership, and have a spare nine minutes, take a look. What a beautiful tech bro bromance we have going on here.


Have AI chatbots killed off question and answer website Stack Overflow?

22 May 2025

Activity at question and answer website Stack Overflow is at an all-time low, according to a recent article in The Pragmatic Engineer. Question levels are presently similar to what they were in 2008, the year Stack Overflow launched. Although the decline in use could be attributed to a number of factors, AI appears to be the main culprit. If people have a coding or application development question, it seems they are now going to ask a chatbot for the answer.

A graph charting question activity on Stack Overflow, compiled by Marc Gravell, makes for compelling viewing. Activity reached an all-time high in 2014, then slowly began falling away.

Maybe that could be ascribed to the presence of competitors in the question and answer space, such as Quora, GitHub, and of course Reddit. But aside from a surge of activity in 2020, when more people were working from home during COVID lockdowns and unable to brainstorm solutions with workplace colleagues, use of Stack Overflow has continued to decline.

Some people seem to be suggesting the website may close. I’m hoping it doesn’t come to that. Stack Overflow has been a great help to me over the years, and is just about the first place I turn to when I have a website or coding question. Almost without exception, someone else has had the exact same difficulty, and I have just about always found a tried and true solution.

I’ve tried using AI for some code-related queries, but so far the suggestions are either not as sound, or simply of no use at all. Hang in there, Stack Overflow.


Should cookbook writers sue each other for plagiarism, or AI chatbots?

22 May 2025

Malcolm Knox, writing for The Sydney Morning Herald, regarding accusations of plagiarism made by Sydney-based Australian cook Nagi Maehashi against Brisbane counterpart Brooke Bellamy:

Nagi and Brooke will be out of their jobs when Microsoft, Google, Meta and the rest of big tech develop AIs to deliver the same caramel slice recipe, at zero cost, provided by an “author” whose personality combines the best of Julia Child, Margaret Fulton, Yotam Ottolenghi, even Nagi and Brooke.

Knox has a point. Perhaps the cooks should be more concerned about the mass appropriation of copyrighted material, without permission or recompense, than about the alleged wrongdoing of one person, which may be well-nigh impossible to prove. Not that the odds of prevailing against big tech would be any better.

I write this in the wake of another surge of AI chatbot activity on this website a few nights ago. Several hundred posts were presumably indexed in a matter of minutes, in the name of machine learning. Sometimes, if something I’ve posted here is used as the basis for answering a question posed to an AI bot, a link to the source material is supplied with the generated answer.

At least I score a visit or two out of it all.


Authors deeply divided over use of generative AI, says BookBub

21 May 2025

United States-based book discovery service BookBub recently asked twelve hundred writers about their thoughts on generative AI. Unsurprisingly, opinion was sharply divided, with respondents split almost exactly down the middle between those against the technology and those in favour of it.

Overall, opinions among authors are deeply divided — many consider any use of generative AI unethical and irresponsible, while others find it a helpful tool to enhance their writing and business processes. Some authors remain conflicted, and are still negotiating their own feelings about the utility and morality of this technology.

It seems to me these findings sum up the way people in general, not just authors, see generative AI.


Half of Australians use AI, but many mistrust, even fear it

17 May 2025

The findings come from a global study into the use of, and attitudes towards, artificial intelligence (AI), carried out by multinational professional services network KPMG, in conjunction with Professor Nicole Gillespie and Dr Steve Lockey of the University of Melbourne.

“The public’s trust of AI technologies and their safe and secure use is central to acceptance and adoption,” Professor Gillespie says. “Yet our research reveals that 78% of Australians are concerned about a range of negative outcomes from the use of AI systems, and 37% have personally experienced or observed negative outcomes ranging from inaccuracy, misinformation and manipulation, deskilling, and loss of privacy or IP.”

While the benefits of AI use in the workplace are understood, many Australians harbour concerns the technology may result in job losses. These fears appear justified to an extent, and not only in Australia: some freelance IT and creative professionals are reporting declines in available work, which they attribute to the prevalence of AI technology.


AI coding tools will make vibe-coded output a thing of the past

14 May 2025

Mark Sullivan, writing for Fast Company:

Google DeepMind research scientist Nikolay Savinov said in a recent interview that AI coding tools will soon support 10 million-token context windows — and eventually, 100 million. With that kind of memory, an AI tool could absorb vast amounts of human instruction and even analyze an entire company’s existing codebase for guidance on how to build and optimize new systems.
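For a rough sense of what those numbers mean, here is a back-of-envelope sketch in Python. The ten tokens per line of source code figure is my assumption, not something from the article, and real tokenizers vary a fair bit by language and coding style:

```python
# Back-of-envelope estimate of how much source code the context windows
# mentioned above could hold. TOKENS_PER_LINE is an assumed average,
# not a figure from the article.
TOKENS_PER_LINE = 10

def lines_that_fit(context_window_tokens: int) -> int:
    """Rough number of source lines a context window of this size could hold."""
    return context_window_tokens // TOKENS_PER_LINE

for window in (10_000_000, 100_000_000):
    print(f"{window:,} tokens is roughly {lines_that_fit(window):,} lines of code")
```

On those assumptions, a 10 million-token window holds roughly a million lines of code, and a 100 million-token window roughly ten million, which is the sort of scale at which analysing an entire company’s codebase starts to sound plausible.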

When a developer uses AI to produce code, but pays no regard to the quality of what is generated, that’s vibe coding.

It might be bad code of the worst sort, but who cares? Not that particular developer. Future coding tools, however, will supposedly one day be so proficient that all the code they create will be top notch. Bad code, and vibe coding with it, will be a thing of the past.

Or will it? The super-duper code these super-duper AI tools generate will be so good that no one will need to worry about its quality any more. That will still be vibe coding, just an entirely different form of it. If you enjoyed the joke, you can start laughing now.
