Image caption: Google says Gemini will power text and image services
By Shiona McCallum and Zoe Kleinman
Google has released an artificial intelligence (AI) model which it claims has advanced "reasoning capabilities" to "think more carefully" when answering hard questions.
AI content generators are known to sometimes invent things, which developers call hallucinations.
Gemini was tested on its problem-solving and knowledge in 57 subject areas including maths and humanities.
Boss Sundar Pichai said it represented a "new era" for AI.
Google adopted a cautious approach to the launch of its AI chatbot, Bard, earlier this year, describing it as "an experiment".
Bard made a mistake in its own publicity demo, providing the wrong answer to a question about space.
But Google is making some big claims for its new model, describing it as its "most capable" yet and suggesting it can outperform human experts in a range of intelligence tests.
Gemini can both recognise and generate text, images and audio - but is not a product in its own right.
Instead it is what is known as a foundational model, meaning it will be integrated into Google's existing tools, including search and Bard.
It will only be available in the US to start with.
Google has so far struggled to attract as much attention and as many users as OpenAI's viral chatbot ChatGPT.
But it claims the most powerful version of Gemini outperforms OpenAI's model GPT-4 - which drives ChatGPT - on 30 of 32 widely used academic benchmarks.
However, a new, more powerful version of the OpenAI software is due to be released next year, with chief executive Sam Altman saying the firm's new products would make its current ones look like "a quaint relative".
It remains to be seen whether the recent turmoil at OpenAI - which saw Mr Altman fired and rehired in the space of a few days - will have any impact on that launch.
The firm also faces fresh competition from Elon Musk's xAI, which is seeking to raise up to $1bn to invest in research and development. Chinese firm Baidu is also racing ahead with its own AI products.
But as the technology rapidly evolves, so do fears about its potential to cause harm.
Governments around the world are trying to develop rules or even legislation to contain the possible future risks of AI.
In November, the subject was discussed at a summit in the UK, where signatories agreed a declaration calling for its safe development. The King also said possible dangers needed to be addressed with a sense of "urgency, unity and collective strength".