New words – 13 November 2023

[Image: an abstract, pixellated pattern of connected pale blue lights on a dark blue background, with the word AI in blue lights in the centre. Credit: MR.Cole_Photographer / Moment / Getty]

AGI noun [U]
/ˌeɪ.dʒiːˈaɪ/
ABBREVIATION FOR artificial general intelligence: a type of artificial intelligence that some people believe will be developed in the future, with the ability to learn to solve any kind of problem as well as, or better than, a human being

He defines AGI as AI systems that can solve any cognitive or human task in ways that are not limited to how they are trained. In theory, AGI, he says, can help scientists develop cures for diseases, discover new forms of renewable energy, and help “solve some of humanity’s greatest mysteries.”
[businessinsider.com, 27 May 2023]

See also artificial intelligence

Poltergeist attack noun [C]
UK /ˈpɒl.tə.ɡaɪst əˌtæk/ US /ˈpoʊl.t̬ɚ.ɡaɪst əˌtæk/
a way of using high-frequency sounds to cause the machine learning algorithms used by self-driving cars to make mistakes in identifying people, objects and other vehicles, which could cause accidents

Poltergeist attacks diverge from traditional cyber threats, such as hacking or jamming. They create deceptive visual realities, similar to optical illusions, for machines employing machine learning for decision-making processes.
[techtimes.com, 26 September 2023]
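
The entry describes the effect rather than the mechanism, and the acoustic interference itself cannot be reproduced in a few lines. The toy Python sketch below illustrates only the general weakness such attacks exploit: a small, deliberately chosen perturbation can flip a machine-learning classifier's decision. The linear "detector", its input and the gradient-sign trick (a standard FGSM-style adversarial example) are all stand-ins invented here, not the Poltergeist technique itself.

```python
import numpy as np

# Loose illustration only: a Poltergeist attack injects acoustic interference
# into a vehicle's sensors, which cannot be reproduced here. This toy shows the
# weakness such attacks exploit: a small, deliberately chosen perturbation can
# flip a machine-learning classifier's decision. The linear "detector" and its
# input are invented, and the perturbation is a gradient-sign (FGSM-style)
# adversarial example rather than the acoustic mechanism.

rng = np.random.default_rng(1)
w = rng.normal(size=64)              # weights of a toy linear "object detector"
x = rng.normal(size=64)              # a toy sensor reading it classifies

def predict(inp):
    return "object present" if inp @ w > 0 else "no object"

score = x @ w
# For a linear model the gradient of the score with respect to the input is
# just w, so stepping each element against the score's sign pushes the input
# across the decision boundary; epsilon is the smallest step that gets there.
epsilon = 1.1 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print("clean input:    ", predict(x))
print("perturbed input:", predict(x_adv))
print(f"per-element change: {epsilon:.3f}")
```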

superalignment noun [U]
UK /ˌsuː.pə.rəˈlaɪn.mənt/ US /ˌsuː.pɚ.əˈlaɪn.mənt/
the study of how to control superintelligent AIs that may be built in the future so that they act in ways that are useful and not harmful to human beings

OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote a blog post on the concept of superalignment, suggesting that the power of a superintelligent AI could lead to the disempowerment of humanity or even human extinction. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” the pair wrote.
[techmonitor.ai, 6 July 2023]


ground verb [T]
/ɡraʊnd/
to give an AI model facts about the real world so that it will produce information that is more accurate and useful

As we start to see more applications built upon foundational AI models — we will also see an increase in the use of external datasets, articles, networks and databases to “ground” the model to factual data and relevant user context.
[medium.com, March 2022]
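
As a rough sketch of what grounding looks like in practice, the Python snippet below prepends retrieved reference text to a user's question before it is sent to a model, so the answer is anchored to supplied facts rather than the model's memory alone. The knowledge base, the retrieval function and the prompt format are all invented here for illustration; real systems typically use vector search over large document stores.

```python
# Rough sketch of "grounding": retrieved reference text is added to the prompt
# so the model answers from supplied facts rather than from memory alone.
# The knowledge base, retrieval and prompt format below are invented examples.

def retrieve_facts(question: str, knowledge_base: dict) -> list:
    """Toy retrieval: return stored passages whose topic word appears in the question."""
    return [text for topic, text in knowledge_base.items()
            if topic.lower() in question.lower()]

def build_grounded_prompt(question: str, facts: list) -> str:
    """Prepend the retrieved facts and instruct the model to rely on them."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer using only the facts below. If they are not sufficient, say so.\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}"
    )

knowledge_base = {
    "solar": "(placeholder passage about solar energy would go here)",
    "wind": "(placeholder passage about wind energy would go here)",
}
question = "How is solar capacity growing?"
prompt = build_grounded_prompt(question, retrieve_facts(question, knowledge_base))
print(prompt)  # this grounded prompt, not the bare question, is sent to the model
```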

ghost work noun [U]
UK /ˈɡəʊst ˌwɜːk/ US /ˈɡoʊst ˌwɝːk/
work done by a human being, usually online and for low pay, to do a task that most people believe is done automatically by a computer

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms.
[theverge.com, 13 May 2019]

voice cloning noun [U]
UK /ˈvɔɪs ˌkləʊ.nɪŋ/ US /ˈvɔɪs ˌkloʊ.nɪŋ/
the use of artificial intelligence to make recordings that sound like the voice of a specific person

… it looks like the system harnesses the power of voice cloning, which has grown in popularity in recent years. The technology works by taking samples of your voice; a computer model is then trained to generate speech in your voice based on whatever text input is applied.
[uk.pcmag.com, 25 September 2023]

RLHF noun [U]
/ˌɑː.rel.eɪtʃˈef/
ABBREVIATION FOR reinforcement learning from human feedback: a technique that improves the performance of an AI by getting human beings to provide information about how good the results it currently produces are

Reinforcement Learning From Human Feedback (RLHF) is an advanced approach to training AI systems that combines reinforcement learning with human feedback. It is a way to create a more robust learning process by incorporating the wisdom and experience of human trainers in the model training process.
[unite.ai, 29 March 2023]
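
The snippet below is a toy sketch of one ingredient of RLHF: fitting a reward model to human preference judgements, so that responses people preferred score higher than the ones they rejected. The features, data and linear model are invented for illustration; production systems use neural reward models and a further reinforcement-learning stage (for example PPO) on top.

```python
import numpy as np

# Toy illustration of the reward-modelling step in RLHF: human annotators pick
# which of two responses is better, and a model is fitted so that preferred
# responses get higher scores. Features, data and sizes are invented here;
# real systems use neural reward models plus a reinforcement-learning stage.

rng = np.random.default_rng(0)
n_pairs, n_features = 200, 5

# Each pair holds feature vectors for the response the human preferred
# (chosen) and the one they rejected.
chosen = rng.normal(loc=0.5, scale=1.0, size=(n_pairs, n_features))
rejected = rng.normal(loc=0.0, scale=1.0, size=(n_pairs, n_features))

w = np.zeros(n_features)          # linear reward model: reward = features @ w
lr = 0.1

for _ in range(500):
    # Bradley-Terry / logistic loss: P(chosen beats rejected) = sigmoid(r_c - r_r)
    margin = chosen @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))
    grad = ((p - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

accuracy = float((chosen @ w > rejected @ w).mean())
print(f"reward model agrees with human preferences on {accuracy:.0%} of pairs")
# In full RLHF this reward model would then steer the language model's outputs,
# e.g. via PPO, so that higher-reward responses become more likely.
```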

About new words

8 thoughts on “New words – 13 November 2023”

  1. Yes – hallucinations are so often about the senses or what is sensible.

    Delusions, on the other hand, really are about thoughts and their structures.

    And even about emotions and actions.

    I did wonder if the Poltergeist attacks were another manifestation of AI hallucination in autonomous vehicles.

    ***

    The AI concept which fascinated me most in this set of words:

    Reinforcement by human feedback.

    I had that concept long before the words.
