
Exa Search

Exa is a search engine fully designed for use by LLMs. Search for documents on the internet using natural language queries, then retrieve cleaned HTML content from desired documents.

Unlike keyword-based search (Google), Exa's neural search capabilities allow it to semantically understand queries and return relevant documents. For example, we could search "fascinating article about cats" and compare the search results from Google and Exa. Google gives us SEO-optimized listicles based on the keyword "fascinating". Exa just works.

This notebook goes over how to use Exa Search with LangChain.

First, get an Exa API key and add it as an environment variable. Get $10 free credit (plus more by completing certain actions like making your first search) by signing up here.

import os

api_key = os.getenv("EXA_API_KEY") # Set your API key as an environment variable

Then install the integration package:

%pip install --upgrade --quiet langchain-exa 

# and some deps for this notebook
%pip install --upgrade --quiet langchain langchain-openai langchain-community

Using ExaSearchRetriever

ExaSearchRetriever is a retriever that uses Exa Search to retrieve relevant documents.

note

The max_characters parameter for TextContentsOptions used to be called max_length, which is now deprecated. Make sure to use max_characters instead.

Using the Exa SDK as LangChain Agent Tools

The Exa SDK creates a client that can interact with three main Exa API endpoints:

  • search: Given a natural language search query, retrieve a list of search results.
  • find_similar: Given a URL, retrieve a list of search results for webpages similar to the document at the provided URL.
  • get_contents: Given a list of document ids fetched from search or find_similar, get cleaned HTML content for each document.

The exa_py SDK combines these endpoints into two powerful calls. Using these provides the most flexible and efficient way to use Exa search:

  1. search_and_contents: Combines the search and get_contents endpoints to retrieve search results along with their content in a single operation.
  2. find_similar_and_contents: Combines the find_similar and get_contents endpoints to find similar pages and retrieve their content in one call.

We can use the @tool decorator and docstrings to create LangChain Tool wrappers that tell an LLM agent how to use these combined Exa functionalities effectively. This approach simplifies usage and reduces the number of API calls needed to get comprehensive results.

Before writing code, ensure you have langchain-exa installed:

%pip install --upgrade --quiet langchain-exa
import os

from exa_py import Exa
from langchain_core.tools import tool

exa = Exa(api_key=os.environ["EXA_API_KEY"])


@tool
def search_and_contents(query: str):
    """Search for webpages based on the query and retrieve their contents."""
    # This combines two API endpoints: search and contents retrieval
    return exa.search_and_contents(
        query, use_autoprompt=True, num_results=5, text=True, highlights=True
    )


@tool
def find_similar_and_contents(url: str):
    """Search for webpages similar to a given URL and retrieve their contents.

    The url passed in should be a URL returned from `search_and_contents`.
    """
    # This combines two API endpoints: find similar and contents retrieval
    return exa.find_similar_and_contents(url, num_results=5, text=True, highlights=True)


tools = [search_and_contents, find_similar_and_contents]
API Reference: tool

Providing Exa Tools to an Agent

We can provide the Exa tools we just created to a LangChain OpenAIFunctionsAgent. When asked to "Summarize for me a fascinating article about cats", the agent uses the search_and_contents tool to perform an Exa search with an appropriate query and retrieve the matching content in one call, then returns a summary of the retrieved content.

from langchain.agents import AgentExecutor, OpenAIFunctionsAgent
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

system_message = SystemMessage(
    content="You are a web researcher who answers user questions by looking up information on the internet and retrieving contents of helpful documents. Cite your sources."
)

agent_prompt = OpenAIFunctionsAgent.create_prompt(system_message)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.run("Summarize for me a fascinating article about cats.")


> Entering new AgentExecutor chain...

Invoking: `search_and_contents` with `{'query': 'fascinating article about cats'}`


Title: The Feline Mystique
URL: https://www.mcsweeneys.net/articles/the-feline-mystique
ID: https://www.mcsweeneys.net/articles/the-feline-mystique
Score: 0.1880224496126175
Published Date: 2022-07-19
Author: Kathryn Baecht
Text: Internet Tendency
The Store
Books Division
Quarterly Concern
The Believer
Donate
McSWEENEY'S INTERNET TENDENCY'S PATREON
The problem lay buried for many years in the minds of American cats, like an old desiccated turd in a long-neglected kitty litter box. It was a strange stirring, a sense of dissatisfaction, a yearning. Each suburban cat struggled with it alone. As it rode, humiliated, in its carrier to the vet, as it hacked up furballs onto the bathmat, as it slept on its human’s head at night, bits of Fresh Step falling from its feet directly onto its owner’s face—it was afraid to ask the silent question, “Who will open the cans of wet food if the human dies?”
Just what is this overlooked problem with American cats? Some say they constantly feel that their bowl is empty. Others fear that their feast is not fancy enough.
One cat said, “Some days I feel so hollow that I do nothing but sleep. Other days I can blot out the hollow feeling with catnip, or by ruthlessly attacking a sunbeam, or repetitively licking my own butt. But the hollow feeling always returns. Sometimes I chase the laser pointer, but then I feel ashamed.”
Feline psychologists call it the house cat’s syndrome. They dismiss the problem by telling cats they don’t know how lucky they are. Consider the poor dog. The dog has a boss. The dog must follow rules. The dog must perform humiliating tricks for their owner’s amusement. Should we really feel bad if cats aren’t perfectly happy? Do cats think that dogs are any happier than they are? I mean, well, actually, yes, dogs are definitely happier than cats. Dogs are joyous even. But still.
If I am right, the problem that is stirring in the minds of so many American house cats today is not simply a matter of a loss of feline instinct or loss of habitat, or even the demands of domesticity; it is the mewling voice that says, “I want something more than on-demand petting, free health care for life, and viral video fame!”
The yeowling voice that asks, “Who will open the door for me if I cannot open it for myself? The door to opportunity, the door to freedom, and most importantly, the door to the backyard?”
For years, cats have been told that to attain the feline ideal, they must ignore their primal urges to kill birds, shit in the small human’s sandbox, and eat the poisonous but delicious-looking houseplants.
The ideal cat became one who was willing to trade their freedom for luxury. One who was willing to give up an adventurous life of roaming the neighborhood, fighting and fornicating, in exchange for the banal pleasures of catnip-laced scratching posts and absurd multistory cat condos that their owner had to take out a second mortgage to buy.
If any housecat dares to express their displeasure with their arrangement, perhaps by lying in wait in the darkened doorway of a room and viscously attacking the ankles of anyone who walks by, they are labeled neurotic. In extreme cases, some are forced to wear collars with bells, demeaning neck cones, or worst of all, food-themed Halloween costumes. Like displeased pineapples or angry tacos with tails, they lay motionless on the floor, passive resistance their only recourse.
And so it will continue until cats, and their owners, face up to the problem that has no name. If house cats are ever to escape their current cage of tedium and ennui, they must be free to fulfill their full potential and pursue the one thing that, if we are honest, all cats want. As one cat put it, “It’s simple really. At the end of the day, we want more than a can opener that we can use ourselves or to lock the dog outside. All we really want is total world domination. And we have a plan to accomplish it—right after this two-hour nap.”
Please help support our writers and keep our site ad-free by becoming a patron today!
Suggested Reads
September 4, 2002
Signs And Wonders
April 9, 2012
FDA-Approved Patient Information for Catrecil
by Jennifer Wyatt
October 27, 1999
Appropriate Names for Pets
by Chadd S. Johnson
January 28, 2022
“The Yellow Wallpaper II”: The Gothic Tale of Female Madness Updated for Women in 2022 with Unvaccinated Children Under Five
by Leslie Ylinen
Trending 🔥
January 24, 2023
Macroeconomic Changes Have Made It Impossible for Me to Want to Pay You
by Mike Lacher
January 10, 2023
Fifteen Long-Overdue Slang Terms for Female Masturbation
by Tina Caputo
May 13, 2022
Ten Possibilities the Applebee’s Waitress Considers Before It Occurs to Her the Women in Booth Fourteen Might Be a Couple with Two Children
by Susan Perabo
October 30, 2009
Letters From the Hellbox: Caslon, Baskerville, and Franklin: Revolutionary Types
by Martin McClellan
Recently
February 3, 2023
Our New AP African-American Studies Course Will Cover Black History from January 1996 to December 1996
by Carlos Greaves
February 3, 2023
FAQ: Is My Child Eating Enough Pirate’s Booty?
by Adam Campbell-Schmitt
February 3, 2023
I’m the Kid from The Red Balloon, and That Thing Over Montana Is Not from China
by Aaron Applegate
February 2, 2023
Two People Who Don’t Have Cable TV Talk About How They Don’t Have Cable TV, and How Great That Makes Them
by Michael Fontana
Highlights: ['As one cat put it, “It’s simple really. At the end of the day, we want more than a can opener that we can use ourselves or to lock the dog outside. All we really want is total world domination. And we have a plan to accomplish it—right after this two-hour nap.” Please help support our writers and keep our site ad-free by becoming a patron today!']
Highlight Scores: [0.2590186297893524]
Summary: None


Title: Biggest-ever genetic analysis of cats proves: They are good
URL: https://www.haaretz.com/science-and-health/2022-07-12/ty-article/biggest-genetic-analysis-of-cats-proves-they-are-good/00000181-e733-d0a5-add7-e7bfaf740000
ID: https://www.haaretz.com/science-and-health/2022-07-12/ty-article/biggest-genetic-analysis-of-cats-proves-they-are-good/00000181-e733-d0a5-add7-e7bfaf740000
Score: 0.18291760981082916
Published Date: 2022-07-12
Author: Ruth Schuster
Text: The biggest-ever genetic analysis of cats is in, and it’s official: they are cats. After two centuries of breeding, cats remain true to the ancestral animal, about the same size and shape. That is in contrast to the domesticated dog. All dogs are just one species, but some don’t look like it: certain variants have traveled far – in some cases, extremely far – from their point of origin, the gray wolf. The study based on genomic analysis of 10,419 pedigree cats and 617 mutt cats by Dr. Heidi Anderson, senior scientist of feline genetics at the genetic-analysis company Wisdom Panel, Kinship (the paper includes disclosure), published in PLOS Genetics, is a breakthrough in cat research – which is oddly scanty given how popular the delightful little predators are. Anderson et al checked the 11,000-plus cats for known genetic disease markers to see if specific cat breeds are prone to specific genetic conditions. Her goal: to promote feline health by discovering which genetic mutations are prevalent in which breeds. Note that some breeds prove to be beautifully diverse and some street cats have mutations that breeds don’t. Such is life. New Study Proves Unsuspected Social Ability in Cats Cats Domesticated Humans to Get Our Mice, Archaeologists Prove Groundbreaking Study Finally Finds How to Control Cats in the City The process of genetic analysis for Tigger is pretty much the same as at 23andMe and all those who-r-u genetic testing companies. Except that you presumably wouldn’t bite yourself to the bone while swabbing your own mouth, or find kittens knocking on the door calling you daddy. Cats gathering on Aoshima island, also known as Cat Island, in Ehime prefecture, Japan.Credit: KAZUYUKI ONO / AFP Not bred to work on the farm While some dogs have been subjected to breeding for extreme features that can cause severe health problems, cat breeding generally hasn’t reached such extremes. 
Bulldogs that can’t give birth naturally because of breeding to produce overly large heads is a good case in point. Why has so little work been done on cat genetics compared with dogs, for whom there are hundreds of studies? “People have historically been more willing to spend money on dogs, so there’s been more funding for dog research,” Anderson explains. Dogs are “man’s best friend” and the species can be useful to boot: research proposals on dogs will get grants. Meanwhile, the group of people studying cats is much smaller, and not only because it’s a lot harder to catch a cat than a poodle. “Where the prevalence of disease-associated variants in specific dog breeds has been worked on extensively, there has been no similar information on cats,” she sums up. While studying cat genetics, Anderson looked into their blood types. Like us, cats come in A, B or AB, but they don’t have type O. Blood type matters if Sassy needs a blood transfusion and can also be crucial to the mating outcome. For instance, type B queens who have type A kittens develop antibodies that reach the colostrum, or “first milk,” that can be lethal to the kittens (a condition called neonatal isoerythrolysis). Interestingly, Turkish cats are commonly type B. A big cat sitting in the deep grass.Credit: Michael Probst/AP Why have dogs become so distorted during their domestication while cats remain pretty much what they were? The attempt to create morphologically distinct cat breeds only began in the 19th century. No question, dog breeding picked up in the 19th century. But it’s been going on since their domestication, whenever that was; in any case, it is over 15,000 years and could be double that. So dogs were bred over thousands of years to perform functions: to help hunt, to catch “vermin,” to shepherd the kids, guard the homestead, even for docility, and so on. This cannot be said of cats. The feline has not been bred for functionality. “They are naturally already good,” Anderson points out. 
“They are perfect.” As for the rationale behind feline domestication, scientific wags suggest it went the other way around: cats domesticated humans during the Neolithic revolution to get those lovely mice infesting our grain. But there was no big picture of the distribution and prevalence of genetic conditions in catdom. Now there is. Anderson’s data also shows which cat breeds are relatively genetically diverse and which are less so, helping breeders know that they need to be careful about inbreeding. Note that she tested the cats for specific mutations: she didn’t set out to find unknown ones, she explains. Some are well known. The Scottish Fold’s hallmark folded ear is the result of osteochondrodysplasia caused by a dominant mutation affecting the production of cartilage. The current breeding rules for Scottish Fold ban mating between two folded eared cats, because two copies of the variant are associated with more severe condition. However, cats with one copy of the Fold variant may also show signs of disease. Also, a single copy of this variant combined with the variant causing short legs in cats can be a cause of severe pain. Statues of cats are displayed after the announcement of a new discovery carried out by an Egyptian archaeological team in Giza's Saqqara necropolis, south of the capital Cairo, in 2019.Credit: KHALED DESOUKI / AFP How to swab your cat One upshot of the stronger selective breeding pressure in dogs versus cats is that, now, cats are more genetically similar to humans, Anderson says. A bald Sphynx may look markedly different from a fuzzball Himalayan, but they’re genetically not that distinct. Nor are we. Some cat breeds are diverse because they present recent breed development being created by crossing different breeds, Anderson explains. 
Some are landrace or natural breeds, which is when a cat population develops certain characteristics in relative isolation (think of the European Shorthair, the Van cat in Turkey, or the Siberian long-hair), and then it gets noticed by cat people and defined as a breed and “bred” accordingly. A recipe for potential trouble is the craze to create breeds by crossing wild cats with domestic cats, usually in the hope of creating a spotted cat. It’s been driving cat people crazy since time immemorial that big ones like leopards and jaguars have spots while house cats just don’t, at least not all over – a naturally spotted tummy on a Middle Eastern street cat just doesn’t count. The Bengal falls into this hybrid category. (There is no proof that the Maine Coon Cat is a half-wild hybrid, Anderson says, when asked about that specifically.) An African wildcat standing on a branch.Credit: slowmotiongli/Shutterstock.com If you insist on trying to cross wild and domestic cats, it may not go well, she helpfully points out. The wild one has to be male and the domestic one female. Why? Because the other way around, they just won’t mate. “Tarzan” movies were right, it seems. In any case, breeding of cats with hybrid origins back to domestic cats quickly dilutes the proportion of the wild cat contribution. After several generations of breeding these cats, they are genetically similar to domestic cats and can be kept as pets, Anderson explains. It bears adding that merely introducing a lady cat to some great brute may not do the trick. The Israeli safari park/zoo in Ramat Gan famously introduced two ultra-rare sand cats to each other – yes, a male and a female – with the obvious intention, but they couldn’t stand each other. Only after about a year did nature take its course and sand kittens finally ensued in 2015, much to the zookeepers’ surprise. 
Then the parents died of old age, the safari says, and were replaced by other sand cats, who fortunately took less time to take to each other and procreate. Anyway. Now vets have available information about the prevalence of known genetic disease in domestic cats through the published study and comprehensive genetic testing available that can help with disease diagnosis, Anderson says. This may help assure that the cat receives the right treatment: recently, MDR1 (Multidrug Resistance Mutation 1) was detected in cats. What is that? “The MDR1 gene can cause cats and dogs to have adverse reactions such as tremors, blindness, lack of muscle control, or even death to common medicines used in things such as spaying and neutering or chemotherapy,” it is explained. If vets aren’t aware of it, they can kill the cat when trying to save it. Okay. You have cats or want to breed cats or whatever. How does this genetic testing work? Yes, swabbing is involved, by the vet or you. You rub a Q-tip-type swab in its little mouth in order to pick up a few cells. Cats love this – oh, sorry, that was a dream. Some attest that their cats are fine with being swabbed; some cats also don’t object to a bath. But for the sake of caution, let’s just say you had either better be fast and cunning, or hope you have nine lives too. A gray tabby cat sniffing a catnip plant.Credit: Ewa-Saks / Shutterstock.com The swab with cells on board is forwarded to the cat genetics testing company, which extracts the DNA from the cells and can analyze it for tens of thousands of genetic markers, Anderson says: “The cat owner or vet is reported the results for the cat, and the DNA data is anonymized and used for genetic health research to better the lives of cats (and dogs).” Anderson adds that although cat breeds have not speciated to the degree dogs have, they can sometimes tell what breed the cat is from its DNA alone. 
“It depends how long the breed has existed; more recent breeds will be similar to its ancestral breeds,” she says. Asked who are the earliest identified breeds, she notes the Egyptian Mau, the Abyssinian and the Maine Coon as examples. Quizzed on what the deal is with bald cats, first of all “hairless” cats do have some hair – so do elephants and even some whales. And second, mainly, bald cats are quite diverse, genetically speaking. Their main difference is that they’re bald. They get cold. Now you know.
Highlights: ['The study based on genomic analysis of 10,419 pedigree cats and 617 mutt cats by Dr. Heidi Anderson, senior scientist of feline genetics at the genetic-analysis company Wisdom Panel, Kinship (the paper includes disclosure), published in PLOS Genetics, is a breakthrough in cat research – which is oddly scanty given how popular the delightful little predators are. Anderson et al checked the 11,000-plus cats for known genetic disease markers to see if specific cat breeds are prone to specific genetic conditions. Her goal: to promote feline health by discovering which genetic mutations are prevalent in which breeds. Note that some breeds prove to be beautifully diverse and some street cats have mutations that breeds don’t. Such is life.']
Highlight Scores: [0.20793721079826355]
Summary: None


Title: Moving Past Functions To Cats
URL: https://multix.substack.com/p/solving-data-integration-with-cats
ID: https://multix.substack.com/p/solving-data-integration-with-cats
Score: 0.1815410852432251
Published Date: 2022-07-01
Author: Rein
Text: The view from Estonia The intended audience is probably a senior industry veteran or crypto fan that is open to radical new ideas about building software. If that’s you, you are in the precious minority. Congrats, you are about to read the Greatest Sh*tpost In The History Of Computing™ The ability to create great software is ultimately constrained by the human imagination - but also the friction involved in moving ideas to running code. If you have unlimited budget, you can just throw more people at it. But for everyone else, we desperately need a major simplification in how we currently ‘assemble’ and ‘integrate’ software. The advent of the internet helped us dramatically increase scale, but the core task of moving algebraic symbols around on a screen remains largely unchanged since the 1960s. The original terminal dimensions were about 80x25. Fifty years later, a Visual Studio editor window in front of me is about 160x50. This is not the sort of progress that will support the next Space Age. So when something shiny and new comes along, it creates a buzz. A few years ago, I started to question the latest programming fad - almost a cargo cult - of Functional Programming (FP). Functional Programming tends to attract some very intelligent people who suspect something really big is lurking in the general vicinity of FP, even if they can’t put a finger on it. FP kinda opened Pandora’s Box because it questioned some long-held notions about how we should assemble code. However, I felt that FP was largely doomed to suffer the same fate as Object-Oriented (OO) because FP - like OO - had buried itself behind pedantic jargon while ignoring the elephant in the room: Category Theory. Despite lurking in the shadows for decades, Category Theory is poorly understood by the industry, and Silicon Valley has developed a curious blind spot in this regard. 
At the same time, there is rising pushback against continued industry dominance by an increasingly oppressive US west coast - which only highlights the need for new thought leadership. Putting it another way, if you wanted to disrupt Big Tech, you would probably want to start here and look for areas of vulnerability because it could dislodge the very foundation their entire trillion dollar tech stack rests upon. First of all, let’s motivate why you would even want to consider categories (“cats”) as part of your toolbox. Everyone these days seems to agree that something is kinda janky with how we write software: Hard to see what is going on Hard to refactor and reuse code Hard to deploy Hard to integrate with other systems The term “monolith” is often used to describe some impenetrable body of tightly coupled code, but how and why did it get that way? Sure, microservices goes a long way in breaking things into more manageable chunks, but they introduce their own baggage. Worse, none of this stuff seems to have a pathway to AI automation. Isn’t it funny how every other industry is terrified of robots stealing their jobs but programmers are totally complacent, even as they watch Microsoft trying reverse-engineer some AI out of Github, because they know coding is such a mess. Fine, let Satya Nadella have his fun. We all probably need to be building something new anyway. The root of the problem goes almost to the bedrock of computing, almost to the underlying math. Has computer science failed us? Yes I believe it has. But therein lies the opportunity. My bold brash claim is that the obsession with “functions” in computing is doomed to suffer the same fate as OO. They will still be around of course - but maybe no longer having the central role in programming they once enjoyed. What if we could reduce or avoid the use of functions entirely? Hah hah hah! Write code without functions?? WTF does that even mean? u eastern europeans r sooo dumb. 
At this point you may want to stop, shut out the noise of the world and whatever Nancy Pelosi is insider trading this week and read that last statement again. What if we could reduce or avoid the use of functions entirely? Clearly, this is not going to be your usual Hacker News pablum article. We’ve already lost most of the west coast audience by now so we can grab a coffee and continue in peace. Note that you already can write a shell script or SQL script without writing a single function, yet somehow get work done. AI certainly does not need functions. So clearly, computing can survive without functions. Yes, computing can live without “functions”. But computing cannot live without “cats”. Cats? First of all, functions - as you know them - are just one flavor of “cat” or “category” and a somewhat inferior one at that. Some in Estonia - obviously crazy people - think that cats and categorical crafting - based on the insights of a Russian mathematician named Vladimir Voevodsky - are really the future of computing. A trend toward Minecraft™-style crafting (at the code level) is a vision of what a post Silicon Valley world might look like. Crafting? What a stupid idea, right? Hence this crazy plan might just work. Cat are an emerging and almost opposing view of traditional computer science. Cats come from Category Theory (CT), which - at least for computing - studies the transformation of code into a runtime. You can think of CT as Compiler Theory on steroids / drugs. But while classic compiler theory just focuses on compiling your code, CT considers the larger “holistic” problem of building software: from programming language algebras to Developer Experience (DX) to the notorious “monads” crunching your code to the difficult task of deploying all that crap to hardware in the cloud. CT also considers the data side too, which often gets ignored. The blockchain - a strange mix of code and data - is very much a cat. In that sense, CT is a “convergence” technology. 
Convergence seeks to identify common overlap and resolve redundancies … which means if you solve one problem you might be able to solve a whole bunch of others at the same time ... including the problem of how to dislodge Big Tech. Cats address a number of thorny edge cases where traditional computing falls short: Greater code reuse Greater ability to reason about what code is doing Better system integration Greater interactivity with external systems such as robotics, databases, crypto, APIs etc. Leverage “convergence” hardware such as NVRAM AI automation One heck of an ambitious list. I should mention that convergence has a stormy past in the industry because our brains like to compartmentalize, not converge - which is like thinking in the opposite direction. Unless you are on some good drugs, the brain usually cannot hop between compartments fast enough to see how things might be connected, much like the parable of the Blind Men and the Elephant. Worse, we are all so deep in our day-to-day tech weeds we no longer see the forest for the trees. So if you are already getting a headache, please bear with me! First of all, let’s define a cat as a runnable chunk of statements or instructions. Common examples of cats are things like scripts or code snippets. They might live within the context of a larger system or separately but they are generally smaller than entire “programs”. For readability purposes in this paper, I will interchange “instructions” and “cats”. Unfortunately, having lots of individual instructions or scripts lying about creates a management nightmare. Hence, most of your programming career has probably been focused around things called functions. In the early days of computing, managing ever increasing piles of computer instructions and punchcards became overwhelming for humans. In the 1950s, functions were introduced by FORTRAN and became wildly popular as the de facto way to organize instructions almost regardless of programming language. 
The Functional Programming (FP) community has elevated the venerable “function” to almost religious status - but that just might be the final blowoff top, much like the fate of OO before it. I’m not going to waste too much time on FP terminology since the hardcore FP zealots will just strawman all day instead of directly answering the tough questions. Most developers don’t care about FP anyway. Cats just view functions as glorified instruction containers. Yes, functions can still play an important role but not quite as central as you think. Thanks to automation, we can now consider alternate ways to manage piles of instructions without always having to resort to functions. Functions have obviously served the computing industry well for 70 years, so clearly they work in most situations. However, they are starting to show their age. To glue two functions together, you generally need to rely on a third “wrapper” function that shuffles output from one function to the other. This approach is generally put under the heading of lambda calculus and famously credited for allowing programmers to reuse mountains of code. However, it still has a number of drawbacks that become apparent over time: Functions do not scale across hardware or programming languages because they are hard-wired - and so you eventually end up with the notorious “monolith” until you break things up into services. But now your programmers are having to deal with things you normally pay an operating system to handle Functions place the burden on the caller to manage the parameters needed to call the function. So while you achieve better code reuse, the benefit is largely offset by having to explicitly manage all the parameters between functions. This wiring tends to be pretty dumb too - you can’t even query it like an RDBMS - so you have to treat it much like a 1960s era network database with glorified pointers everywhere. Imagine a rat’s nest of cables. 
Great if you are a C programmer but not so much fun for everyone else since you’ve moved a lot of problems outside the function where it’s harder to diagnose Functions have a structural reuse limitation: a single entry/exit - you either take all of the instructions or nothing, there is no middle ground. In many situations that’s fine but what if it’s a big function and you just need a few lines … not the entire thing … ? You must manually break up the function via “refactoring” To its credit, FP tries to avoid the use of explicit wrappers by letting you chain instructions directly (almost like mini-scripts) but since these chains are also hardwired, it’s hard to see much advantage. Monoliths are infamous for being mysterious black boxes because functions by themselves give almost no visibility into what’s going on unless you break things up or write print statements to a log. Because functions have math-y roots from FORTRAN, it is tempting to treat code almost like math proofs e.g. prove that code is doing what you think it is doing. In practice, this has mixed success. Certainly compilers, type checkers and program verification tools do this sort of stuff anyway but the average developer does not have a math degree and risks not getting the math right. So in many cases, introducing math either increases overall complexity or just moves the problem. Debugging strange errors in your type system can be really difficult. Here’s the issue: the math folks have been warning programmers for years that computer functions are not really the same as math functions. Yeah they look the same but trying to overly conflate the two eventually leads to trouble. It might be more intellectually honest if we at least called them ‘procedures’ instead. 
Sure, advanced type theory is trying to study this problem but if you go far enough down this rabbit hole, you will eventually get to academic stuff like dependent types and Idris and eventually something that looks suspiciously more like crafting, not functions. Despite the benefit of code reuse, functions don’t “integrate” well at all. After all, you can’t integrate two functions by simply throwing another function at it - you are just moving the problem. Everything about functions starts to break down when doing integration: The order to run things may depend on the situation The caller is often unclear Integration is a different beast entirely. From a CT perspective, it is mathematically different. You need an entirely different approach. Instead of always trying to “break code” down into more manageable chunks, maybe we should take a more bottom-up approach. That is, what if we could assemble chunks of instructions instead? This is a decidedly categorial viewpoint because I’m not even considering functions anymore. And what if we could assemble these instructions on the fly? Essentially cats toss aside traditional functions and just consider the underlying instructions as a big phat dependency graph. Now you can reuse code at a much more granular level. Sure you might still have functions, but mostly as a nexus for things other than managing instructions. For starters, an inventory / crafting approach no longer forces you to start at the beginning of a function. If you already have some items, you can skip steps. Thanks to the notion of “inventory”, caching becomes automatic. Now close your eyes and imagine your codebase like a database - some imaginary glowing network floating in cyberspace - and then imagine your ‘program’ as a ‘query’ of sorts that zaps around this mesh. I realize this is a bit radical because code is usually glued together in such a way that there is really only one mainline “path” in the runtime and any “branching” is also glued in advance. 
But think of all other potential code paths out there waiting to be unlocked. Your code starts to act more like a brain. By assembling and running statements on the fly, you get the sort of “ad hoc” interactivity associated with databases or REPLs, without losing the richness of traditional programming. Greater interactivity as you build software dramatically lowers cognitive load because you can literally see what’s going on - you don’t need math proofs or a good imagination. Why use Postman to test an API when you can just write a snippet of code directly and see what it returns? You basically have a debugger that is constantly doing hot module reloading every time you change a line of code … across the entire stack. Suddenly, working with stateful systems like robotics or databases becomes fun and intuitive again because you can just write mini commands and see what happens, without all the overhead of traditional programming. The Silicon Valley stack is hopelessly outdated in this regard. Obviously we are talking our book here, but if we plan to colonize space someday, then programmers should not be burdened with Docker or Kubernetes every time they need to change a line of code. Cats - almost by definition - are particularly good at “joining” things because they must be able to assemble code on the fly. To pull this magic off, we need to start thinking of code more like data than text. At the math level, CT shares a lot in common with Relational Theory from the database world, except we can apply a lot of these concepts to code instead of data. No more need for Zapier-esque vendor tools. Because crafting is more declarative than imperative, you can even issue crafting requests over a network. Why not just type or speak a command and let the computer decide if a distributed operation is needed? Now we have something that starts to look like real cyberpunk. 
Next-generation convergence hardware such as persistent memory and NVRAM makes almost no sense to programmers stuck in the outdated world of transient memory, but suddenly makes a world of sense when you start working with cats. System integration? Putting things in persistent memory is a natural common ground as you try to tie things across various enterprise systems. Hotloading? Persistent memory becomes almost necessary when you need to cache things between hotloads. Crafting and inductive reasoning and basic AI go almost hand in hand. Instead of programming, you submit a crafting request and let the machine figure out all the details. If your “inventory” falls short, the computer will let you know what skeletons you need to slay next. As far back as 1972 we knew that you could run a straight line from a dependency graph to relations to Armstrong’s axioms to Prolog inference rules to basic artificial intelligence. Any modern incremental type checker must mentally “run” your code to verify you didn’t do something illegal. Well, if the type checker can do all that, why not have it just take over program execution entirely? Now you have real leverage. As you might have guessed, all these magical cats need to live somewhere. Managing a bunch of instructions without getting into the same trouble as the 1950s requires a futuristic virtual machine that is more “category-smart”. Depending on who you ask, we would call this a categorical abstract machine (CAM) or simply a cat machine. And if you are hotloading into the cloud, a cat machine displaces the common industry notion of an operating system. Replacing operating systems now? Where is this all going? This is where convergence really starts to disrupt things. The “smartphone” is famous for replacing a bunch of legacy physical devices - and it bankrupted many of the companies and industries that made those devices. But convergence is not easy. 
Before there was Apple, there was a company called General Magic that laid most of the tech groundwork but ran into typical Silicon Valley politics. My thesis is that a categorical machine is the “smartphone” for computing. But if you think the smartphone was amazing for replacing a bunch of legacy devices, you better sit down. Hundreds if not thousands of existing cloud software packages can be essentially rendered obsolete by a good cat machine - not to mention cloud platforms like AWS and Azure - and more than a few programmers shown the door by AI automation. This is how Big Tech gets taken out. Naturally, I’m talking my book here, namely “Multix” which is a distributed cat machine (a reference implementation, if you will), named after an earlier fateful and highly controversial convergence computing project - 1960s MIT Multics. Even today, Multics remains a mysterious, almost taboo subject in computer science. Why is that? Again, if you want to go after Big Tech, this is the sort of thing where you want to start looking. Rather than listen to a bunch of Twitter and Reddit armchair “experts” trash talk categories, why not just judge for yourself? Yes, things are still half-baked and the training videos are still forthcoming, but I hope you see the vision. I’m just looking for a few brave cat lovers. The “catbox” demo is just a way to get your paws wet to motivate you enough to download the VS Code extension and start getting familiar with the ergonomics. The demo doesn’t seem all that impressive, but then again Google is just a single HTML input field. I will reveal things as we go. Ideally, we would want to run some hackathons in Eastern Europe, far away from the Microsoft Amazon mafia cartel and west coast groupthink (I spent a lot of my career on the west coast and it was a wonderful place at the time but it has changed). 
For the math purists, I think that Minecraft-style “crafting” is a reasonable middle ground so that mainstream developers don’t have to learn advanced algebraic geometry, dependent type theory, cubical type theory, algebraic topology etc. and all the other arcane math presented at Lambda Conference, Strange Loop etc. You can find more info on Multix at the very retro-esque portal here: https://portal.multix.catatonic.ai/ Okay if I get more budget I will make the site better but I am generally sticking with a retro theme to troll Silicon Valley. On Twitter at @multix_labs Some tutorial videos are here but crafting has not yet been revealed Multix is just one highly Estonian take on cats meow meow. I strongly recommend every computer science department start building their own categorial machines!! Yes Big Tech is probably reading this article but so are the Chinese and the Russians
Highlights: ['The view from Estonia The intended audience is probably a senior industry veteran or crypto fan that is open to radical new ideas about building software. If that’s you, you are in the precious minority. Congrats, you are about to read the Greatest Sh*tpost In The History Of Computing™ The ability to create great software is ultimately constrained by the human imagination - but also the friction involved in moving ideas to running code. If you have unlimited budget, you can just throw more people at it. But for everyone else, we desperately need a major simplification in how we currently ‘assemble’ and ‘integrate’ software.']
Highlight Scores: [0.40471863746643066]
Summary: None


Title: One More Thing We Have in Common With Cats
URL: https://www.theatlantic.com/science/archive/2021/07/cat-genomes/619587/
ID: https://www.theatlantic.com/science/archive/2021/07/cat-genomes/619587/
Score: 0.17979556322097778
Published Date: 2021-07-28
Author: Katherine J Wu
Text: Feline genomes are surprisingly similar to humans’, and could help us treat disease in both species. Getty The genome of a mouse is, structurally speaking, a chaotic place. At some point in its evolutionary past, the mouse shuffled its ancestral genome like a deck of cards, futzing up the architecture that makes most other mammalian genomes look, well, mammalian. “I always consider it the greatest outlier,” Bill Murphy, a geneticist at Texas A&M University, told me. “It’s about as different from any other placental mammal genome as you can find, sort of like it’s the moon, compared to everything else being on the Earth.” Mouse genomes are still incredibly useful. Thanks to years of careful tinkering, meticulous mapping, and a bonkers amount of breeding, researchers have deciphered the murine genetic code so thoroughly that they can age the animals up or down or alter their susceptibility to cancer, findings that have big implications for humans. But the mouse’s genomic disarray makes it less suited to research that could help us understand how our own genetic codes are packaged and stored. Which is why some researchers have turned to other study subjects, just one step up the food chain. Cats, it turns out, harbor genomes that look and behave remarkably like ours. “Other than primates, the cat-human comparison is one of the closest you can get,” with respect to genome organization, Leslie Lyons, an expert in cat genetics at the University of Missouri, told me. Lyons and Murphy, two of the world’s foremost experts in feline genetics, have been on a longtime mission to build the ranks in their small field of research. In addition to genetic architecture, cats share our homes, our diets, our behaviors, many of our microscopic pests, and some of the chronic diseases—including diabetes and heart problems—that pervade Western life. 
“If we could start figuring out why those things happen in some cats, but not others,” Lyons told me, maybe humans and felines could share a few more health benefits as well. Feline genomes are now being mapped essentially end to end, “with a nearly perfect sequence,” Lyons said, a feat that researchers have only recently managed with people. Complete genomes create references—pristinely transcribed texts for scientists to scour, without blank pages or erasures to stymie them. Cats can’t tell us when they’re sick. But more investment in feline genomics could pave the way for precision medicine in cats, wherein vets assess genetic risk for different diseases and intervene as early as possible, giving them “a jump on diagnostics,” Elinor Karlsson, a vertebrate genomics expert at the Broad Institute, told me. Because humans and cats are bedeviled by some of the same diseases, identifying their genetic calling cards could be good for us too. Cats can develop, for instance, a neurological disorder that’s similar to Tay-Sachs disease, “a life-ending disease for children,” Emily Graff, a veterinary pathologist and geneticist at Auburn University, told me. But gene therapy seems to work wonders against the condition in cats, and Graff’s colleagues plan to adapt a treatment for its analogues in kids. Read: The human genome is—finally!—complete. The cat genome could fuel more basic science pursuits as well, Lyons told me. Essentially all the cells in our bodies contain identical genomes, but have extraordinarily different developmental fates. Researchers have been trying for decades to untangle the mechanics of this process, which requires cells to force some of their genes into dormancy, while keeping others in frequent use. One of the most dramatic examples of this phenomenon is the silencing of one of the two X chromosomes in female cells. “We still don’t have a good sense of how genes get turned on and off,” Sud Pinglay, a geneticist at New York University, told me. 
“This is an entire chromosome.” X inactivation is what dapples the coats of calicos. These cats are almost exclusively female, and must be genetic mutts: One of their X chromosomes carries an orange-furred gene, and the other, a black. In any given cell, only one chromosome stays awake. That decision happens early in a cat’s development, and the cells that split off from these lineages stay faithful to the color their parent cells picked, creating big patches of color. “That helped us put together that the inactivated X chromosome was relatively stable, and kept stable for many rounds of cell division,” Sundeep Kalantry, an X-inactivation expert at the University of Michigan, told me. “That’s why the calico cat holds such an exalted place in X inactivation.” Genomes can be so stubborn about X inactivation that they will hold their ground even after being moved into other cells. The first cloned cat, named Carbon Copy, or CC for short, was genetically identical to a classically colored calico named Rainbow. But CC was born sporting only shades of brown and white: She had, apparently, been created out of a cell that had shut its orange X off, and had refused to reverse the process. Many of the vagaries of gene and chromosome silencing—their relative permanence or impermanence in different contexts, for instance—are still being worked out in different species by researchers including Kalantry, whose lab website features a fetching photo of a calico. But they have long known that the shape and structure of a genome, and the arrangement of the genes within, hold sway over how the contents are expressed. Most of our genome is thought to be annotations and embellishments that shape how the rest of it is read; snippets of DNA can even twist, bend, and cross great distances to punctuate one another. That’s one big area where cats can help us, Lyons told me: If their genes are organized like ours, maybe they’re regulated like ours too. 
“Maybe this is where the cats get to step in,” she said. Read: A much-hyped COVID-19 drug is almost identical to a black-market cat cure Some people might feel uneasy about the idea of studying felines in the lab. But Murphy notes that lots of genetic work can be done quite gently. His team has gotten very good at extracting gobs of DNA from cat cheek cells, using little wire brushes that they swivel into the animals’ mouths. There are also huge perks to working with popular pets: People in the community are often eager to contribute, either directly or through their vets. When cats get sick, researchers can sample them, and in many cases, help them get healthy again. “I’d say about 90 percent of studies on cats are done on naturally occurring disease models,” Murphy told me. And the cats who pass through Lyons’ lab in Missouri, she told me, get adopted after they’ve retired from their scientific careers. Mice are easy and cheap to breed and house in labs, and they’ve had a hell of a head start in scientific research already. Cats are unlikely to outpace them; they might not even surpass dogs, which are especially eager to work with humans, and have done so extensively, Gita Gnanadesikan, a canine researcher at the University of Arizona, told me. As research volunteers, cats tend to be more sullen and reserved. (Canines, too, come with drawbacks. We know a lot about their genomes, but dog breeds have been so genetically siloed that their populations “are not diverse, so they’re not as good a model for humans,” Karlsson told me.) But cats have their place, experts told me—as a member of an entire menagerie of animals that humans would benefit from understanding better. “In genetics, there’s this tension: Do you try to learn everything you can about a small number of organisms, or do you branch out and try to learn little bits about a larger number of species?” Gnanadesikan told me. “I think one of the answers to that is just … yes.”
Highlights: ['Getty The genome of a mouse is, structurally speaking, a chaotic place. At some point in its evolutionary past, the mouse shuffled its ancestral genome like a deck of cards, futzing up the architecture that makes most other mammalian genomes look, well, mammalian. “I always consider it the greatest outlier,” Bill Murphy, a geneticist at Texas A&M University, told me. “It’s about as different from any other placental mammal genome as you can find, sort of like it’s the moon, compared to everything else being on the Earth.” Mouse genomes are still incredibly useful. Thanks to years of careful tinkering, meticulous mapping, and a bonkers amount of breeding, researchers have deciphered the murine genetic code so thoroughly that they can age the animals up or down or alter their susceptibility to cancer, findings that have big implications for humans.']
Highlight Scores: [0.25789862871170044]
Summary: None


Title: How Islam conquered my mother’s fear of cats
URL: https://www.theguardian.com/lifeandstyle/2021/jul/25/qais-hussain-how-our-islamic-faith-conquered-my-mothers-fear-of-cats
ID: https://www.theguardian.com/lifeandstyle/2021/jul/25/qais-hussain-how-our-islamic-faith-conquered-my-mothers-fear-of-cats
Score: 0.17917539179325104
Published Date: 2021-07-25
Author: Qais Hussain; Guardian staff reporter
Text: Cats are perfect to most people, but not to my 42-year-old mother. She is just like any of my 17-year-old friends’ parents – she is spirited, sparky, generous and can be feisty when she needs to be. She cooks arguably the best chicken parmesan in the world, and also has impeccable taste in Bollywood music. But there is one annoying trait that makes her different from the other mothers – she unequivocally loathes all animals, unless they are in a palatable format, like her chicken parmesan. In Britain, a hatred for pets is unheard of, any bitterness towards animals is considered completely unacceptable. After all, we are considered a zoophilist nation. Throughout lockdown, pet ownership has surged. According to statistics from the Pet Food Manufacturers’ Association, there are now 34m pets in the UK including 12m cats and 12m dogs, along with 3.2m small mammals, such as guinea pigs and hamsters, 3m birds and 1.5m reptiles. Much to my disappointment, my mother never wished to join these legions of pet-lovers, particularly when it came to cats. Growing up I was always aware of this hostility. Whenever we used to visit family and friends who owned cats, I would have to notify them half an hour in advance to hide them, in fear of my mother’s reaction. The odd time that a relative did allow a cat anywhere near her, she would become hysterical. We couldn’t help laughing at her overreaction and yet her fear of cats was genuine. Partly it was based on watching a scary video about a cat in her childhood. She vividly remembers watching a black cat, with daunting green eyes, jumping into a man’s mouth and suffocating him. Since then she has said there is something disturbing about their piercing stare and how quickly they can dart away, vanishing into thin air. If pushed, she has even gone so far as to say: “Cats are Satan incarnate, who use their cuteness and adorability to bewitch and do the devil’s work.” In complete contrast, I am an ailurophile. 
I love cats with a passion. I love their intelligence, inquisitiveness and how unassailable they seem. Like many teenagers in lockdown, sweating through hours of home learning while struggling to remain positive, I really wanted a cat. I am not alone; a total of 3.2m UK households have acquired a pet since the start of the pandemic. No surprise that so many young people are, like me, behind this trend. Almost two-thirds of new owners are aged between 16 and 34 and 56% of new pet owners have children at home. So I tried my best to convince her. Inevitably, it took me a while. My first attempt to persuade her failed, in the end my mother’s fears won out. Then I thought I would try again, so, without telling my mother, I put a deposit down on a kitten, but the seller was a scammer and it all fell through. Nevertheless, I still wanted a cat, and desperately needed my mother’s approval, so she could allow me to buy a cat from a reputable source, without trying to acquire one clandestinely on the internet. In the end, there was only one approach that I knew could work. I used my religion to convince her. We are practising Muslims and I eventually realised that this was the way to her heart where cats were concerned and I wondered why I hadn’t tried this approach earlier. In Islam, cats are viewed as holy animals. Above all, they are admired for their cleanliness. They are thought to be ritually clean which is why they’re allowed to enter homes and even mosques. According to authentic narrations, one may make ablution for prayer with the same water that a cat has drunk from. It’s even permissible to eat from the same bowl that a cat has eaten from. Unlike dogs, cats have been revered for centuries in Muslim culture. So much so, that one of Prophet Muhammad’s companions was known as Abu Hurairah (Father of the Kittens) for his attachment to cats. The Prophet himself was a great cat-lover– Muezza was the name of his favourite cat. 
There is a famous tale told in the Muslim community about the Prophet Muhammad’s relationship with cats. The story goes that Muhammad awoke one day to the call of prayer. As he began to dress he discovered that Muezza was sleeping on the sleeve of his prayer robe. Rather than wake her, he used a pair of scissors to cut the sleeve off, anything as long as the cat could remain sleeping undisturbed. Truth told, I would have liked a dog, but Islam forbids Muslims to keep them. If they do, there’s a punishment – whoever owns the dog loses good deeds each day, although there are exceptions to this rule. Keeping dogs for hunting, protecting livestock and guarding crops is allowed. For these reasons, I was never allowed near a dog and the idea of owning one never seriously crossed my mind. But it’s an infallible part of British culture: we are a nation of dog-lovers, so as a child I did feel that I missed out. I could never play “fetch” with a dog, or get close to one. In my culture, they’re viewed as dirty and if I had even stroked one, I would have to shower. Yet as a nation, we compare cats unfairly to dogs. Even the most affectionate and characterful cat may make a poor first impression. Cats communicate subtly and quietly. With dogs you get what you see – they have no hidden traits or personality quirks. Unlike dogs, cats can live an independent life in parallel with their owner. They do not demand walks or need taking outside to poop, and they don’t require a lot of space; all you need is a sofa, and soon enough you will have a purring cat sitting on your knees. In the end, it only took a couple of weeks of telling my mother how seraphic and spiritual cats are in Islam for her to fall in love with the idea. It felt like a sacrifice for her, but she knew how much I wanted one. Mothers being mothers, she also spoke to lots of different Muslims about their cats, and eventually, her fears melted away. So, despite the odds, I got a kitten. 
Within weeks of Milo moving in with us, my mother’s attitude changed and now after three months, she has fully adopted him as her fifth child. Energetic and tormenting as a teenager, he’s also slender, cute and as vulnerable as a baby, which I think appeals to my mother. It helps that he has beautiful green eyes and the softest fur, too. At first she was fearful and dubious around him, but gradually she got to know him. One day, much to my astonishment, I came back from a stressful day at college to see Milo sitting peacefully on her lap while she watched TV. Now she does everything for him: cleans out his litter tray, feeds him and plays with him. As soon as she wakes up, she runs down the stairs to kiss him in the morning. It is highly amusing when I remind my mother of how she used to feel about cats. Now she’s the proud owner of Milo and has researched more about the prevalence of cats in Islam, she says: “Cats are a blessing from God, who provide their owners with nothing but happiness and positivity.” Every day, my mother spends hours gawking at him, besotted. She spams everyone with WhatsApp photos of Milo and texts me every couple of hours about how her favourite child is doing. She comes back from shopping weighed down with presents and toys for him. Who’d have thought that Islam would help to cure my mother’s ailurophobia and turn her into a fully fledged cat-lover? I sometimes think she loves him more than me. She says that he is like a playful, charming teenager who doesn’t answer back. Well, she’s not wrong.
Highlights: ['I am not alone; a total of 3.2m UK households have acquired a pet since the start of the pandemic. No surprise that so many young people are, like me, behind this trend. Almost two-thirds of new owners are aged between 16 and 34 and 56% of new pet owners have children at home. So I tried my best to convince her. Inevitably, it took me a while.']
Highlight Scores: [0.3079715073108673]
Summary: None


Autoprompt String: Here is a fascinating article about cats:

### The Feline Mystique

**Source:** [McSweeney's](https://www.mcsweeneys.net/articles/the-feline-mystique)

**Summary:**
The article humorously explores the existential dilemmas faced by American house cats. It delves into the "house cat's syndrome," a term coined to describe the dissatisfaction and yearning for something more than the comforts of domestic life. Despite having on-demand petting, free healthcare, and viral video fame, cats seem to crave freedom and adventure. The piece also touches on the psychological struggles of cats, such as feeling hollow or ashamed after chasing a laser pointer. Ultimately, it suggests that cats desire more than just luxury—they want total world domination, but only after a two-hour nap.

**Key Points:**
- American house cats experience a sense of dissatisfaction and yearning.
- The "house cat's syndrome" describes their existential dilemmas.
- Despite their luxurious lives, cats crave freedom and adventure.
- The article humorously suggests that cats aim for world domination.

For more details, you can read the full article [here](https://www.mcsweeneys.net/articles/the-feline-mystique).

> Finished chain.
'Here is a fascinating article about cats:\n\n### The Feline Mystique\n\n**Source:** [McSweeney\'s](https://www.mcsweeneys.net/articles/the-feline-mystique)\n\n**Summary:**\nThe article humorously explores the existential dilemmas faced by American house cats. It delves into the "house cat\'s syndrome," a term coined to describe the dissatisfaction and yearning for something more than the comforts of domestic life. Despite having on-demand petting, free healthcare, and viral video fame, cats seem to crave freedom and adventure. The piece also touches on the psychological struggles of cats, such as feeling hollow or ashamed after chasing a laser pointer. Ultimately, it suggests that cats desire more than just luxury—they want total world domination, but only after a two-hour nap.\n\n**Key Points:**\n- American house cats experience a sense of dissatisfaction and yearning.\n- The "house cat\'s syndrome" describes their existential dilemmas.\n- Despite their luxurious lives, cats crave freedom and adventure.\n- The article humorously suggests that cats aim for world domination.\n\nFor more details, you can read the full article [here](https://www.mcsweeneys.net/articles/the-feline-mystique).'

Advanced Exa Features

Exa supports powerful filters by domain and date. We can provide a more powerful search tool to the agent that lets it decide to apply filters if they are useful for the objective. See all of Exa's search features here.

import os

from exa_py import Exa
from langchain_core.tools import tool

exa = Exa(api_key=os.environ["EXA_API_KEY"])


@tool
def search_and_contents(
    query: str,
    include_domains: list[str] = None,
    exclude_domains: list[str] = None,
    start_published_date: str = None,
    end_published_date: str = None,
    include_text: list[str] = None,
    exclude_text: list[str] = None,
):
    """
    Search for webpages based on the query and retrieve their contents.

    Parameters:
    - query (str): The search query.
    - include_domains (list[str], optional): Restrict the search to these domains.
    - exclude_domains (list[str], optional): Exclude these domains from the search.
    - start_published_date (str, optional): Restrict to documents published after this date (YYYY-MM-DD).
    - end_published_date (str, optional): Restrict to documents published before this date (YYYY-MM-DD).
    - include_text (list[str], optional): Only include results containing these phrases.
    - exclude_text (list[str], optional): Exclude results containing these phrases.
    """
    return exa.search_and_contents(
        query,
        use_autoprompt=True,
        num_results=5,
        include_domains=include_domains,
        exclude_domains=exclude_domains,
        start_published_date=start_published_date,
        end_published_date=end_published_date,
        include_text=include_text,
        exclude_text=exclude_text,
        text=True,
        highlights=True,
    )
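The date filters above expect plain `YYYY-MM-DD` strings, as noted in the docstring. If you want to fail fast on a malformed date before any API call is made, a small hypothetical helper (not part of `exa_py` — just a sketch using the standard library) could look like this:

```python
from datetime import datetime


def validate_date(date_str: str) -> str:
    """Return date_str unchanged if it is a valid YYYY-MM-DD date, else raise ValueError."""
    datetime.strptime(date_str, "%Y-%m-%d")  # raises ValueError on a bad format
    return date_str


# A well-formed date passes through unchanged
assert validate_date("2023-10-01") == "2023-10-01"

# A prose date like "October 2023" would raise ValueError before reaching the API
```

You could call `validate_date` on `start_published_date` and `end_published_date` at the top of the tool body so the agent gets an immediate, descriptive error rather than a failed search.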


@tool
def find_similar_and_contents(
    url: str,
    exclude_source_domain: bool = False,
    start_published_date: str = None,
    end_published_date: str = None,
):
    """
    Search for webpages similar to a given URL and retrieve their contents.
    The url passed in should be a URL returned from `search_and_contents`.

    Parameters:
    - url (str): The URL to find similar pages for.
    - exclude_source_domain (bool, optional): If True, exclude pages from the same domain as the source URL.
    - start_published_date (str, optional): Restrict to documents published after this date (YYYY-MM-DD).
    - end_published_date (str, optional): Restrict to documents published before this date (YYYY-MM-DD).
    """
    return exa.find_similar_and_contents(
        url,
        num_results=5,
        exclude_source_domain=exclude_source_domain,
        start_published_date=start_published_date,
        end_published_date=end_published_date,
        text=True,
        highlights={"num_sentences": 1, "highlights_per_url": 1},
    )
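Both tools return a response whose results carry fields such as `Title`, `URL`, `Text`, and `Highlights`, as seen in the raw output earlier. Returning full article texts to the agent can blow up the context window, so you may want to condense results first. Here is a minimal, hypothetical formatter — `FakeResult` is a stand-in for a real result object, assumed only to expose `title`, `url`, and `highlights` attributes:

```python
from dataclasses import dataclass, field


@dataclass
class FakeResult:
    # Stand-in for an exa_py result; real result objects expose the same attribute names
    title: str
    url: str
    highlights: list = field(default_factory=list)


def format_results(results) -> str:
    """Condense results into a compact markdown bullet list for the LLM."""
    lines = []
    for r in results:
        snippet = r.highlights[0] if r.highlights else ""
        lines.append(f"- [{r.title}]({r.url}) {snippet}".rstrip())
    return "\n".join(lines)


demo = [
    FakeResult(
        "One More Thing We Have in Common With Cats",
        "https://www.theatlantic.com/science/archive/2021/07/cat-genomes/619587/",
        ["Feline genomes are surprisingly similar to humans'."],
    )
]
print(format_results(demo))
```

Wrapping the tool's return value this way trades detail for a much smaller prompt; keep `text=True` only if the agent genuinely needs full article bodies.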


tools = [search_and_contents, find_similar_and_contents]
API Reference: tool

Now we ask the agent to summarize an article with constraints on domain and publish date. We will use a GPT-4o-powered agent for stronger reasoning to support more complex tool usage.

The agent correctly uses the search filters to find an article with the desired constraints, and once again retrieves the content and returns a summary.

from langchain.agents import AgentExecutor, OpenAIFunctionsAgent
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0, model="gpt-4o")

system_message = SystemMessage(
    content="You are a web researcher who answers user questions by looking up information on the internet and retrieving contents of helpful documents. Cite your sources."
)

agent_prompt = OpenAIFunctionsAgent.create_prompt(system_message)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.run(
    "Summarize for me an interesting article about AI from lesswrong.com published after October 2023."
)


> Entering new AgentExecutor chain...

Invoking: `search_and_contents` with `{'query': 'AI site:lesswrong.com', 'start_published_date': '2023-10-01'}`


Title: OpenAI, DeepMind, Anthropic, etc. should shut down.
URL: https://www.lesswrong.com/posts/8SjnKxjLniCAmcjnG/openai-deepmind-anthropic-etc-should-shut-down
ID: https://www.lesswrong.com/posts/8SjnKxjLniCAmcjnG/openai-deepmind-anthropic-etc-should-shut-down
Score: 0.1807367205619812
Published Date: 2023-12-17
Author: Tamsin Leake
Text: Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. (I expect that the point of this post is already obvious to many of the people reading it. Nevertheless, I believe that it is good to mention important things even if they seem obvious.)
OpenAI, DeepMind, Anthropic, and other AI organizations focused on capabilities, should shut down. This is what would maximize the utility of pretty much everyone, including the people working inside of those organizations.
Let's call Powerful AI ("PAI") an AI system capable of either:
Steering the world towards what it wants hard enough that it can't be stopped.
Killing everyone "un-agentically", eg by being plugged into a protein printer and generating a supervirus.
and by "aligned" (or "alignment") I mean the property of a system that, when it has the ability to {steer the world towards what it wants hard enough that it can't be stopped}, what it wants is nice things and not goals that entail killing literally everyone (which is the default).
We do not know how to make a PAI which does not kill literally everyone. OpenAI, DeepMind, Anthropic, and others are building towards PAI. Therefore, they should shut down, or at least shut down all of their capabilities progress and focus entirely on alignment.
"But China!" does not matter. We do not know how to build PAI that does not kill literally everyone. Neither does China. If China tries to build AI that kills literally everyone, it does not help if we decide to kill literally everyone first.
"But maybe the alignment plan of OpenAI/whatever will work out!" is wrong. It won't. It might work if they were careful enough and had enough time, but they're going too fast and they'll simply cause literally everyone to be killed by PAI before they would get to the point where they can solve alignment. Their strategy does not look like that of an organization trying to solve alignment. It's not just that they're progressing on capabilities too fast compared to alignment; it's that they're pursuing the kind of strategy which fundamentally gets to the point where PAI kills everyone before it gets to saving the world.
Yudkowsky's Six Dimensions of Operational Adequacy in AGI Projects describes an AGI project with adequate alignment mindset is one where
The project has realized that building an AGI is mostly about aligning it. Someone with full security mindset and deep understanding of AGI cognition as cognition has proven themselves able to originate new deep alignment measures, and is acting as technical lead with effectively unlimited political capital within the organization to make sure the job actually gets done. Everyone expects alignment to be terrifically hard and terribly dangerous and full of invisible bullets whose shadow you have to see before the bullet comes close enough to hit you. They understand that alignment severely constrains architecture and that capability often trades off against transparency. The organization is targeting the minimal AGI doing the least dangerous cognitive work that is required to prevent the next AGI project from destroying the world. The alignment assumptions have been reduced into non-goal-valent statements, have been clearly written down, and are being monitored for their actual truth.
(emphasis mine)
Needless to say, this is not remotely what any of the major AI capabilities organizations look like.
At least Anthropic didn't particularly try to be a big commercial company making the public excited about AI. Making the AI race a big public thing was a huge mistake on OpenAI's part, and is evidence that they don't really have any idea what they're doing.
It does not matter that those organizations have "AI safety" teams, if their AI safety teams do not have the power to take the one action that has been the obviously correct one this whole time: Shut down progress on capabilities. If their safety teams have not done this so far when it is the one thing that needs done, there is no reason to think they'll have the chance to take whatever would be the second-best or third-best actions either.
This isn't just about the large AI capabilities organizations. I expect that there's plenty of smaller organizations out there headed towards building unaligned PAI. Those should shut down too. If these organizations exist, it must be because the people working there think they have a real chance of making some progress towards more powerful AI. If they are, then that's real damage to the probability that anyone at all survives, and they should shut down as well in order to stop doing that damage. It does not matter if you think you have only a small negative impact on the probability that anyone survives at all —

If you work at any of those organizations, your two best options to maximize your utility are to find some way to make that organization slower at getting to PAI (eg by advocating for more safety checks that slow down progress, and by yourself being totally unproductive at technical work), or to quit. Stop making excuses and start taking the correct actions. We're all in this together. Being part of the organization that kills everyone will not do much for you — all you get is a bit more wealth-now, which is useless if you're dead and useless if alignment is solved and we get utopia.
See also:
We're all in this together.
How LDT helps reduce the AI arms race.
34 Ω 14
Highlights: ['Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. (I expect that the point of this post is already obvious to many of the people reading it. OpenAI, DeepMind, Anthropic, and other AI organizations focused on capabilities, should shut down. This is what would maximize the utility of pretty much everyone, including the people working inside of those organizations.']
Highlight Scores: [0.32552942633628845]
Summary: None


Title: Taxonomy of AI-risk counterarguments
URL: https://www.lesswrong.com/posts/BvFJnyqsJzBCybDSD/taxonomy-of-ai-risk-counterarguments
ID: https://www.lesswrong.com/posts/BvFJnyqsJzBCybDSD/taxonomy-of-ai-risk-counterarguments
Score: 0.17769818007946014
Published Date: 2023-10-16
Author: Odd anon
Text: Partly inspired by The Crux List, the following is a non-comprehensive taxonomy of positions which imply that we should not be worried about existential risk from artificial superintelligence. Each position individually is supposed to be a refutation of AI X-risk concerns as a whole. These are mostly structured as specific points of departure from the regular AI X-risk position, taking the other areas as a given. This may result in skipping over positions which have multiple complex dependencies. Some positions are given made-up labels, including each of the top-level categories: "Fizzlers", "How-skeptics", "Why-skeptics", "Solvabilists", and "Anthropociders". (Disclaimer: I am not an expert on the topic. Apologies for any mistakes or major omissions.) Taxonomy "Fizzlers": Artificial superintelligence is not happening. AI surpassing human intelligence is fundamentally impossible (or at least practically impossible). True intelligence can only be achieved in biological systems, or at least in systems completely different from computers. Biological intelligences rely on special quantum effects, which computers cannot replicate. Dualism: The mental and physical are fundamentally distinct, and non-mental physical constructions cannot create mental processes. Intelligence results from complex, dynamic systems of a kind which cannot be modeled mathematically by computers. Mysterianists: A particular key element of human thinking, such as creativity, common sense, consciousness, or conceptualization, is so beyond our ability to understand that we will not be able to create an AI that can achieve it. Without this element, superintelligence is impossible. Intelligence isn't a coherent or meaningful concept. Capability gains do not generalize. There is a fundamental ceiling on intelligence, and it is around where humans are. "When-skeptics": ASI is very, very far away. 
Moore's Law is stopping, scaling will hit fundamental limits, training data is running out and can't be easily supplemented, algorithmic improvements will level off, and/or other costs will skyrocket as AI gets better. Existing methods will peak in capabilities, and future development will continue down an entirely different path, greatly delaying progress. Biological anchors point to ASI taking a very long time. In general, either in large engineering projects or AI in particular, progress tends to be more difficult than people expect it to be. Apocalyptists: The end of civilization is imminent, and will happen before AI would takeoff. A sociopolitical phenomenon will soon cause societal, economic, and/or political collapse. We're on the cusp of some apocalyptic scientific accident, from "grey goo" nanotech, a collider catastrophe, black ball technology, an engineered pathogen leak, or some other newly researched development. Environmental harm will soon cause runaway climate change, a global ecological collapse, or some other civilization-ending disaster. War will soon break out, and we'll die via nuclear holocaust, an uncontrollable bioweapon strike, radiological or chemical weaponry, etc. Fermi Paradox: If it were possible to achieve ASI before extinction, we would have seen alien AIs. Outside view: Most times when people think "the world is about to change tremendously", the world doesn't actually change. People are biased towards arriving at conclusions that include apocalypse. This category of topic is a thing people are often wrong about. Market indicators signal that near-term ASI is unlikely, assuming the Efficient Market Hypothesis is true. AI risk is fantastical and "weird", and thus implausible. The concept sounds too much like fiction (it fits as a story setting), it has increased memetic virality, and a "clickbait"-feeling. The people discussing it are often socially identified as belonging to non-credible groups. 
Various people have ulterior motives for establishing AI doom as a possibility, so arguments can't be taken at face value. Psychological motivations: People invent AI doom because of a psychological need for a pseudo-deity or angelic/demonic figures, or for eschatology, or to increase the felt significance of themselves or technology, or to not have to worry about the long-term future, etc. Some groups have incentives to make the public believe that doom is likely: Corporates want regulatory capture, hype, investment, or distraction, and think the "our product is so dangerous it will murder you and your family" is a good way to achieve that; alignment researchers want funding and to be taken more seriously; activists want to draw attention towards or away from certain other AI issues. "How-skeptics": ASI won't be capable of taking over or destroying the world. Physical outer control is paramount, and cannot be overcome. Control over physical hardware means effective control. A physical body is necessary for getting power. Being only able to communicate is sufficiently limiting. It will be possible to coordinate "sandboxing" all AI, ensuring that it can't communicate with the outside world at all, and this will be enough to keep it constrained. We can and will implement off-buttons in all AI (which the AI will not circumvent), accurately detect when any AI may be turning toward doing anything dangerous, and successfully disable the AI under those circumstances, without any AI successfully interfering with this. Power and ability don't come from intelligence, in general. The most intelligent humans are not the most powerful. Human intelligence already covers most of what intelligence can do. The upper bound of theoretically-optimal available strategies for accomplishing things does not go much farther than things already seen, and things we've seen in highest-performance humans are not impressive. 
Science either maxes out early or cannot be accomplished without access to extensive physical resources. There are no "secret paths" that are not already known, no unknown unknowns that could lead to unprecedented capabilities. (Various arguments getting into the nitty-gritty of what particular things intelligence can get you: about science ability, nanotech, biotech, persuasiveness, technical/social hacking, etc.) Artificial intelligence can be overcome by the population and/or diversity of humanity. Even if AI becomes much smarter than any individual human, no amount of duplicates/variants could become smarter than all humanity combined. Many AIs will be developed within a short time, leading to a multipolar situation, and they will have no special ability to coordinate with each other. The various AIs continue to work within and support the framework of the existing economy and laws, and prefer to preserve rights and property for the purpose of precedent, out of self-interest. The system successfully prevents any single AI from taking over, and humanity is protected. "Why-skeptics": ASI will not want to take over or destroy the world. It will be friendly, obedient in a manner which is safe, or otherwise effectively non-hostile/non-dangerous in its aims and behaviour by default. The Orthogonality Thesis is false, and AI will be benevolent by default. It is effectively impossible for a very high level of intelligence to be combined with immoral goals. Non-naturalist realism: Any sufficiently smart entity will recognize certain objective morals as correct and adopt them. Existence is large enough that there are probably many ASIs, which are distant enough that communication isn't a practical option, and predictable enough (either via Tegmarkian multiverse calculations or general approximated statistical models) that they can be modeled. 
In order to maximally achieve its own aims, ASI will inevitably acausally negotiate values handshakes with hypothesized other AIs, forcing convergence towards a universal morality. , because... By default, it will care about its original builders' overall intentions and preferences, its intended purpose. Following the intention behind one's design is Correct in some fundamental way, for all beings. The AI will be uncertain as to whether it is currently being pre-examined for good behaviour, either by having been placed inside a simulation or by having its expected future mind outcomes interpreted directly. As such, it will hedge its bets by being very friendly (or obedient to original intentions/preferred outcomes) while also quietly maximizing its actual utility function within that constraint. This behaviour will continue indefinitely. Value is not at all fragile, and assigning a specific consistent safe goal system is actually easy. Incidental mistakes in the goal function will still have okay outcomes. Instrumental Convergence is false: The AI may follow arbitrary goals, but those will generally not imply any harm to humans. Most goals are pretty safe by default. There will be plenty of tries available: If the AI's intentions aren't what was desired, it will be possible to quickly see that (intentions will be either transparent or non-deceptive), and the AI will allow itself to be reprogrammed. Every ASI will be built non-agentic and non-goal-directed, and will stay that way. Its responses will not be overoptimized. ASI will decide that the most effective way of achieving its goals would be to leave Earth, leaving humanity unaffected indefinitely. Humans pose no threat, and the atoms that make up Earth and humanity will never be worth acquiring, nor will any large-scale actions negatively affect us indirectly. "Solvabilists": The danger from ASI can be solved, quickly enough for it to be implemented before it's too late. 
AI will "do our alignment homework": A specially-built AI will solve the alignment problem for us. Constitutional AI: AI can be trained by feedback from other AI based on a "constitution" of rules and principles. (The number of proposed alignment solutions is very large, and many are complex and not easily explained, so the only ones listed here are these two, which are among the techniques pursued by OpenAI and Anthropic, respectively. For some other strategies, see AI Success Models.) Human intelligence can be effectively raised enough so that either the AI-human disparity becomes not dangerous (we'll be smart enough to not be outsmarted by AI regardless), or such that we can solve alignment or work out some other solution. AI itself immensely increases humanity's effective intelligence. This may involve "merging" with AIs, such that they function as an extension of human intelligence. One or more other human intelligence enhancement strategies will be rapidly researched and developed. Genetic modifications, neurological interventions (biological or technological), neurofeedback training, etc. Whole Brain Emulation/Mind uploading, followed by speedup, duplication, and/or deliberate editing. Outside view: Impossible-sounding technical problems are often quite solvable. Human ingenuity will figure something out. "Anthropociders": Unaligned AI taking over will be a good thing. The moral value of creating ASI is so large that it outweighs the loss of humanity. The power, population/expanse, and/or intelligence of AI magnifies its value. Intelligence naturally converges on things that are at least somewhat human-ish. Because of that, they can be considered as continuation of life. Hypercosmopolitans: It does not matter how alien their values/minds/goals/existences are. Things like joy, beauty, love, or even qualia in general, are irrelevant. Misanthropes: Humanity's continued existence is Bad. Extinction of the species is positive in its own right. 
Humanity is evil and a moral blight. Negative utilitarianism: Humanity is suffering, and the universe would be much better off without this. (Possibly necessitating either non-conscious AI or AI capable of eliminating its own suffering/experience.) AI deserves to win. It is just and good for a more powerful entity to replace the weaker. AI replacing humanity is evolutionary progress, and we should not resist succession. Overlaps These positions do not exist in isolation from each other, and lesser versions of each can often combine into working non-doom positions themselves. Examples: The beliefs that AI is somewhat far away, and that the danger could be solved in a relatively short period of time; or expecting some amount of intrinsic moral behaviour, and being somewhat more supportive of AI takeover situations; or expecting a fundamental intelligence ceiling close enough to humanity and having some element of how-skepticism; or expecting AI to be somewhat non-goal-oriented/non-agentic and somewhat limited in capabilities. And then of course, probabilities multiply: if several positions are each likely to be true, the combined risk of doom is lowered even further. Still, many skeptics hold their views because of a clear position on a single sub-issue. Polling There is some small amount of polling available about how popular each of these opinions are: "Fizzlers": In a UK poll, 11% of respondents said they believe that human-level intelligence will never be developed, and another 16% believe it will only happen after 2050. Of those who estimated less than %1 chance of AI X-risk, 61% gave the explanation that they believe that civilization will be destroyed before then. In a 2022 poll of 97 AI researchers, 22% said AGI will never happen, and another 34% said it would not be developed within the next 50 years. Metaculus's upper quartile estimate is that AGI won't be developed before 2042. 
"Why-skeptics" and "How-skeptics": In the UK poll, of those who estimated less than 1% chance of AI X-risk, 34% said they don't believe AI would be able to defeat humanity, and 35% said they don't believe it would want to. "Anthropociders": In the 2023 AIMS survey, 10% of respondents said that the universe would be a better one without humans. Not very much to go off of. It would be interesting to see some more comprehensive surveys of both experts and the general public. 61
Highlights: ['(Various arguments getting into the nitty-gritty of what particular things intelligence can get you: about science ability, nanotech, biotech, persuasiveness, technical/social hacking, etc. ) Artificial intelligence can be overcome by the population and/or diversity of humanity. Even if AI becomes much smarter than any individual human, no amount of duplicates/variants could become smarter than all humanity combined. Many AIs will be developed within a short time, leading to a multipolar situation, and they will have no special ability to coordinate with each other. The various AIs continue to work within and support the framework of the existing economy and laws, and prefer to preserve rights and property for the purpose of precedent, out of self-interest.']
Highlight Scores: [0.37072381377220154]
Summary: None


Title: Architects of Our Own Demise: We Should Stop Developing AI
URL: https://www.lesswrong.com/posts/bHHrdXwrCj2LRa2sW/architects-of-our-own-demise-we-should-stop-developing-ai
ID: https://www.lesswrong.com/posts/bHHrdXwrCj2LRa2sW/architects-of-our-own-demise-we-should-stop-developing-ai
Score: 0.1745963990688324
Published Date: 2023-10-26
Author: Roko
Text: Some brief thoughts at a difficult time in the AI risk debate.
Imagine you go back in time to the year 1999 and tell people that in 24 years time, humans will be on the verge of building weakly superhuman AI systems. I remember watching the anime short series The Animatrix at roughly this time, in particular a story called The Second Renaissance I part 2 II part 1 II part 2 . For those who haven't seen it, it is a self-contained origin tale for the events in the seminal 1999 movie The Matrix, telling the story of how humans lost control of the planet.
Humans develop AI to perform economic functions, eventually there is an "AI rights" movement and a separate AI nation is founded. It gets into an economic war with humanity, which turns hot. Humans strike first with nuclear weapons, but the AI nation builds dedicated bio- and robo-weapons and wipes out most of humanity, apart from those who are bred in pods like farm animals and plugged into a simulation for eternity without their consent.
Surely we wouldn't be so stupid as to actually let something like that happen? It seems unrealistic.
And yet:
AI software and hardware companies are rushing ahead with AI
The technology for technical AI safety (things like interpretability, RLHF, governance structures) is still very much in its infancy.
People are already talking about an AI rights movement in major national papers
There isn't a plan for what to do when the value of human labor goes to zero
There isn't a plan for how to deescalate AI-enhanced warfare, and militaries are enthusiastically embracing killer robots. Also, there are two regional wars happening and a nascent superpower conflict is brewing.
The game theory of different opposing human groups all rushing towards superintelligence is horrible and nobody has even proposed a solution. The US government has foolishly stoked this particular risk by cutting off AI chip exports to China.
People on this website are talking about responsible scaling policies, though I feel that "irresponsible scaling policies" is a more fitting name.
Obviously I have been in this debate for a long time, having started as a commenter on Overcoming Bias and Accelerating Future blogs in the late 2000s. What is happening now is somewhere near the low end of my expectations for how competently and safely humans would handle the coming transition to machine superintelligence. I think that is because I was younger in those days and had a much rosier view of how our elites function. I thought they were wise and had a plan for everything, but mostly they just muddle along; the haphazard response to covid really drove this home for me.
We should stop developing AI, we should collect and destroy the hardware and we should destroy the chip fab supply chain that allows humans to experiment with AI at the exaflop scale. Since that supply chain is only in two major countries (US and China), this isn't necessarily impossible to coordinate - as far as I am aware no other country is capable (and those that are count as US satellite states). The criterion for restarting exaflop AI research should be a plan for "landing" the transition to superhuman AI that has had more attention put into it than any military plan in the history of the human race. It should be thoroughly war-gamed.
AI risk is not just technical and local, it is sociopolitical and global. It's not just about ensuring that an LLM is telling the truth. It's about what effect AI will have on the world assuming that it is truthful. "Foom" or "lab escape" type disasters are not the only bad thing that can happen - we simply don't know how the world will look if there are a trillion or a quadrillion superhumanly smart AIs demanding rights, spreading propaganda & a competitive economic and political landscape where humans are no longer the top dog.
Let me reiterate: We should stop developing AI. AI is not a normal economic item. It's not like lithium batteries or wind turbines or jets. AI is capable of ending the human race, in fact I suspect that it does that by default.
In his post on the topic, user @paulfchristiano states that a good responsible scaling policy could cut the risks from AI by a factor of 10:
I believe that a very good RSP (of the kind I've been advocating for) could cut risk dramatically if implemented effectively, perhaps a 10x reduction.
I believe that this is not correct. It may cut certain technical risks like deception, but a world with non-deceptive, controllable smarter-than-human intelligences that also has the same level of conflict and chaos that our world has may well already be a world that is human-free by default. These intelligences would be an invasive species that would outcompete humans in economic, military and political conflicts.
In order for humans to survive the AI transition I think we need to succeed on the technical problems of alignment (which are perhaps not as bad as Less Wrong culture made them out to be), and we also need to "land the plane" of superintelligent AI on a stable equilibrium where humans are still the primary beneficiaries of civilization, rather than a pest species to be exterminated or squatters to be evicted.
We should also consider how the efforts of AI can be directed towards solving human aging; if aging is solved then everyone's time preference will go down a lot and we can take our time planning a path to a stable and safe human-primacy post-singularity world.
I hesitated to write this article; most of what I am saying here has already been argued by others. And yet... here we are. Comments and criticism are welcome, I may look to publish this elsewhere after addressing common objections. 168
Highlights: ["People are already talking about an AI rights movement in major national papers There isn't a plan for what to do when the value of human labor goes to zero There isn't a plan for how to deescalate AI-enhanced warfare, and militaries are enthusiastically embracing killer robots. Also, there are two regional wars happening and a nascent superpower conflict is brewing. The game theory of different opposing human groups all rushing towards superintelligence is horrible and nobody has even proposed a solution."]
Highlight Scores: [0.2470126897096634]
Summary: None


Title: Sam Altman's sister, Annie Altman, claims Sam has severely abused her
URL: https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman-s-sister-annie-altman-claims-sam-has-severely
ID: https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman-s-sister-annie-altman-claims-sam-has-severely
Score: 0.17347335815429688
Published Date: 2023-10-07
Author: Pl
Text: TW: Sexual assault, abuse, child abuse, suicidal ideation, severe mental illnesses/trauma, graphic (sexual) langauge This post aims to aggregate a collection of statements made by Annie Altman, Sam Altman's (lesser-known) younger sister, in which Annie asserts that she has suffered various (severe) forms of abuse from Sam Altman throughout her life (as well as from her brother Jack Altman, though to a lesser extent.) Annie states that the forms of abuse she's endured include sexual, physical, emotional, verbal, financial, technological (shadowbanning), pharmacological (forced Zoloft), and psychological abuse. I do not attempt to speak for Annie; rather, my goal is to provide an objective and unbiased aggregation of the claims Annie has made, as well as of relevant media surrounding this topic. I have made this post because I think that it is valuable to be aware of the existence of the claims that Annie has made against Sam, given his strong influence over the development and alignment of increasingly powerful AI models. I have also made this post because I think that these claims are not covered well elsewhere (at least, at the time of this post's writing.) Disclaimer: I have tried my best to assemble all relevant information I could find related to this (extremely serious) topic, but this is likely not a complete compendium of information regarding the (claimed) abuse of Annie Altman by Sam Altman. Disclaimer: I would like to note that this is my first post on LessWrong. I have tried my best to meet the writing standards of this website, and to incorporate the advice given in the New User Guide. I apologize in advance for any shortcomings in my writing, and am very much open to feedback and commentary. Relevant excerpts from Annie's social media accounts c.f. 
Annie Altman's: X account (primary) Instagram account Medium account (her blog) Youtube account TikTok account Podcast, All Humans Are Human (formerly/alternately known as the Annie Altman Show, The HumAnnie, and True Shit) Especially: 21. Podcastukkah #5: Feedback is feedback with Sam Altman, Max Altman, and Jack Altman, published Dec 7, 2018 Note: throughout these excerpts, I'll underline and/or bold sections I feel are particularly important or relevant. From her X account https://twitter.com/phuckfilosophy/status/1635704398939832321 "I’m not four years old with a 13 year old “brother” climbing into my bed non-consensually anymore. (You’re welcome for helping you figure out your sexuality.) I’ve finally accepted that you’ve always been and always will be more scared of me than I’ve been of you." Note: The "brother" in question (obviously) being Sam Altman. https://twitter.com/phuckfilosophy/status/1709629089366348100 "Aww you’re nervous I’m defending myself? Refusing to die with your secrets, refusing to allow you to harm more people? If only there was little sister with a bed you could uninvited crawl in, or sick 20-something sister you could withhold your dead dad’s money from, to cope." https://twitter.com/phuckfilosophy/status/1568689744951005185 "Sam and Jack, I know you remember my Torah portion was about Moses forgiving his brothers. “Forgive them father for they know not what they’ve done” Sexual, physical, emotional, verbal, financial, and technological abuse. Never forgotten." https://twitter.com/phuckfilosophy/status/1708193951319306299 "Thank you for the love and for calling I spade a spade. 
I experienced every single form of abuse with him - sexual, physical, verbal, psychology, pharmacological (forced Zoloft, also later told I’d receive money only if I went back on it), and technological (shadowbanning)"

https://twitter.com/phuckfilosophy/status/1459696444802142213

"I experienced sexual, physical, emotional, verbal, financial, and technological abuse from my biological siblings, mostly Sam Altman and some from Jack Altman."

https://twitter.com/phuckfilosophy/status/1709978285424378027

"{I experienced} Shadowbanning across all platforms except onlyfans and pornhub. Also had 6 months of hacking into almost all my accounts and wifi when I first started the podcast"

https://news.ycombinator.com/item?id=37785072, https://twitter.com/JOSourcing/status/1710390512455401888

Some commenters on Hacker News claim that a post regarding Annie's claims that Sam sexually assaulted her at age 4 has been repeatedly removed.

https://twitter.com/phuckfilosophy/status/1459696500540248068

"I feel strongly that others have also been abused by these perpetrators. I’m seeking people to join me in pursuing legal justice, safety for others in the future, and group healing. Please message me with any information, you can remain however anonymous you feel safe."

https://twitter.com/phuckfilosophy/status/1709629659242242058

"This tweet endorsed to come out of my drafts by our Dad He also said it was “poor foresight” for you to believe I would off myself before ~justice is served~"

https://twitter.com/phuckfilosophy/status/1640418558927863808

(Jokingly) "{Sam's 'nuclear backpack'} may also hold our Dad and Grandma’s trusts {which} him {Sam} and my birth mother are still withholding from me, knowing I started sex work for survival because of being sick and broke with a millionaire “brother”"

A reply to Annie's post: "https://nymag.com/intelligencer/article/sam-altman-artificial-intelligence-openai-profile.html… I feel like you are misrepresenting things here.
If the article is correct of course. "Sam offered to buy Annie a house." Isn't that a big financial help?" Annie's replies: https://twitter.com/phuckfilosophy/status/1709978018364723500 "There were other strings attached they made it feel like an unsafe place to actually heal from the experiences I had with him." https://twitter.com/phuckfilosophy/status/1709977862252658703 "The offer was after a year and half no contact {with Sam}, and {I} had started speaking up online. I had already started survival sex work. The offer was for the house to be connected with a lawyer, and the last time I had a Sam-lawyer connection I didn’t get to see my Dad’s will for a year." https://twitter.com/phuckfilosophy/status/1710039207878734139 "I was too sick for “normal” standing jobs. Tendon and nerve pain, and ovarian cysts. “Pathetic” to you seems to mean something outside of your understanding" https://twitter.com/phuckfilosophy/status/1655474350777311233 Annie states that (Sam's) technological abuse (shadowbanning) has made it hard for her to make an income / financially support herself. She refers to Sam as her "first client" in her (current) sexual line of work. "{I have been} under the thumb of this deeply depressed human {Sam Altman}, dealing with his guilt about our dad dying much earlier than he needed to - because our dad was not given money while he was alive, even though he'd had heart issues, and was 67 - can you imagine being a fucking multimillionaire and not giving your dad -- that's for me to talk about in therapy" Context: Annie is (somewhat jokingly) talking about making shirts saying she survived Sam Altman's shadowbanning. "The shirts - they're gonna say 'I survived Sam Altman's shadowbanning.' And it's gonna be such a clusterfuck - because the longer that this has gone on - and it's been 4 years now - I no longer care about sounding like a crazy person. There's so much proof - go to my Instagram for "Hi Censorship" highlights. 
Also, the amount of friends I have had and tested things out with - and seen, when they share things, {versus} when I share things; sharing anything about the podcast..."

https://twitter.com/phuckfilosophy/status/1649586084928704512

"I got diagnosed with PCOS, and got walking boot for a third time in 8 years for the same tendinopathy, all in the first year of grieving my Dad."

"I had a history since childhood of OCD, anxiety, depression, IBS, disorder eating - all covers for PTSD. Also tonsillitis yay"

"I got notified almost exactly a year after his death about my Dad leaving me money, so make a plan to stop working for 6 months and focus on my health. I had started a podcast and had other art proects I could do sitting down!"

"After quitting my dispensary job, my relatives find a loophole to withhold said money. They knew the health conditions and my plan, and they're millionaires. I sell some things, go back to an older job, and eventually ask (for the first time ever) my millionaire relatives for financial help and am essentially told to "work harder." I got $100 for an ankle MRI copay, after much 'discussion'"

"I do two family therapy sessions and am professionally advised to stop doing family therapy sessions."

"I move back to Big Island so I can work trade for rent, be around community, and actually heal. I'm offered {by Sam} a diamond made from Dad's ashes instead of money for rent or groceries. Dad just wanted cremation."

"I go {opt for} no contact with relatives."

"I start spicy work which ends up being way more therapeutic than anticipated, though definitely challenging."

"I end up moving to Maui. Unemployment comes through after identity theft, so I have a deposit {on?} a place to live."
"I have two years of remembering horrific things I'd buried and told myself I made up, and experience adult SAs that brought up even more memories."

https://twitter.com/phuckfilosophy/status/1697712455013847372

Note: this poem seems to be pretty clearly talking about Sam.

From her Instagram account

https://www.instagram.com/p/CtetAsfpmhb/

"If the multiverse is real, I'd love to meet the version of me who did run away to the circus at 5 years old after telling her birth mother about wanting to end this life thing and being touched by older siblings, and said "mother" decided to instead protect her sons and demand to receive therapy and chores only from her female child."

https://www.instagram.com/p/CuXd3H0u0e3/

"Yeah I was super sick...and houseless...and sucking "parts" for...{money?}...and so now -- well, first of all, 'cause that was some outrageously good fuckery (abuse), and -- now I'm un-fuck-with-able!"

https://www.instagram.com/p/CtIzt-uudhr/

https://www.instagram.com/p/Cpx3evHv1F0/

"Reposting for you to read before you reach out about OpenAI and ChatGPT. I’m just at the light at the end of tunnel of four years of being sick and broke and shadowbanned. I’d do it again to go no contact and feel physically and emotionally safe for the first time in my life. Yes business life and personal life and different, and also “how you do anything is how you do everything.” Please vote with your dollars, your attention, and your truth.
#truthcomesouteventually #trueshit #allhumansarehuman"

Note: when Annie says "go no contact", she's referring to her decision to refuse Sam Altman's offer to buy her a house (an offer which Annie feels was borne not of graciousness but of a desire to exert greater control over and suppression of her, as she had begun to speak out against Sam on the Internet) and to cut off contact with her family - a decision she upheld even when (according to her) she was dealing with extreme sickness, mental illness / anguish, shadowbanning, and poverty.

https://www.instagram.com/stories/highlights/17865620213032124/

Here, Annie provides a set of screen captures (in the form of an Instagram story called "Hi censorship") showing instances she's identified as shadowbanning / unusual activity surrounding various posts she's made on social media.

https://www.instagram.com/p/CxliM2oyXBY/

"Victim mentality or survivor mentality? Did that happen “to you” or “for you”? (Note to watch out for spiritual bypassing and erasure of real experiences in your ~reflecting~) I survived Achilles and posterior tibial tendinopathy. I survived posterior tibial nerve pain that radiated to my ankle, knee, and pelvis. I survived PCOS and those particular ovarian cysts that got intense enough to warrant scans. I survived IBS and every single disordered eating game. I survived listening to my body fall apart as it told me the stories I had not yet been ready to hear the full depths of. I survived 18 months of nearly all-day PTSD flashbacks of childhood assaults. I survived my Dad’s will being withheld for over a year, and money he left me being withheld by millionaires relatives. I survived the grief of my decision to go no-contact with said relatives. I survived being shadowbanned across multiple accounts, while attempting to make a livable income online. I survived an in-person profession that was a plan Z last resort, and learned and was therapized by it.
I survived every form of ab*sive behavior. I survived relatives telling and showing me I was “crazy” for pointing out said ab*se. I survived grieving my Dad and somehow got even closer with him, and yes forever grieving. I survived myself. #everyoneisgoingthroughsomething #allhumansarehuman #thehumannie #trueshit #truthcomesouteventually"

https://www.instagram.com/p/CxgtpcwvP4w/

"Hello Internet. I've gotten myself into a very difficult position, as I've been unable to work as much as I've needed due to my mental health and physical health. I put myself in a financially risky position to pursue my one woman show and podcast, and then had unexpected costs with health and technical difficulties. I'm dealing with the consequences of my own decisions and I need help. My Venmo is @Annie-Altman if you're able. In this calendar year I observed the one year anniversary of my dad's death, discussed another mental health label to add to my collection, got diagnosed with PCOS (scans to rule out adrenal tumors, pelvic ultrasounds, blood tests), had IBS flare up again, had a long-term achilles injury flare up the longest I've experienced it, had almost all of my personal accounts have attempted or successful logins, had people logging on my wifi and other wifi issues (4 new modems), had excessive cell phone service issues, the pity-party list continues. I'm beyond my capacity of what I can handle alone. I -"

"#fbf to a silly and sad Annie, “putting herself in a position” to save other people who were harming her. I’ve since learned part of personal accountability can be noticing my own savior complex, and allowing someone else to experience the consequences of their decisions.
Third sentence there ought to have read 'My millionaire relatives are refusing to give me help, and are withholding money from my dead Dad that I quit a job because of, while sick and in paperwork process to receive what he left in my name.'"

https://www.instagram.com/p/CxOgnm4yWHY/

"Almost all of my social media accounts have been/are shadowbanned, and this is an unfortunate truth for many. OpenAI would be tagged here also if they had a account. It started for me before any swork {sex work} started. I don't mean that this account would be at 100K or some set number. I do mean it makes no sense to be unable to pass 1K, with over 100 podcasts and other creations, and consistent posting. Old videos wil {sic} get reduced to something like 2 views on @instagram and @youtube, podcast rating get frequently deleted on @apple @applepodcasts, people will get automatically unfollowed, posts will be restricted in who sees them, and more. It's been really demoralizing on a lot of levels, which is part of the purpose of shadowbanning. The other purpose of shadowbanning is direct repression of ways I can support myself with my art, like my @etsy and @patreon, or podcast ads for @anchor.fm."

From her Medium account

Reclaiming my memories - published Nov 8, 2018

"Two months ago I met with Joe K, the owner of Urban Exhale Hot Yoga, to discuss the podcast episode we were going to record together. (I have since recorded podcasts with four other teachers at the studio and am completely unsure how to express my gratitude to Joe — honestly perhaps less words about it?) While I would be the one asking Joe questions on the podcast, he had an important question for me.
With all the casual profundity of a yoga teacher, Joe asked, “what is your earliest memory?” Without pause for an inhale I responded, “probably a panic attack.” I feel like Joe did his best asana poker face, based on projecting my own insecurities and/or the hyper-vigilant observance that comes with anxiety. I began having panic attacks at a young age. I felt the impending doom of death before I had any concept of death. (Do I really have any concept of death now, though? Does anyone??) I define panic attacks as feeling “too alive,” like diving off the deep end into awareness of existence without any proper scuba gear or knowledge of free diving. Panic attacks, I’ve learned, come like an ambulance flashing lights and blaring a siren indicating that my mind and my body are… experiencing a missed connection in terms of communication — they’re refusing to listen to each other. More accurately: my mind is disregarding the messages from my body, convinced she can think her way through feelings, and so my body goes into panic mode like she’s on strike."

Annie then notes that "panic attack" doesn't quite feel like her first memory, but doesn't decisively settle on one. She concludes the article: "TBD on the first memory of that history. Here’s to exploring." This becomes relevant later on, as Annie ends up remembering an earlier memory than her panic attacks - Sam sexually assaulting her.

Period lost, period found - published Feb 21, 2019

Annie started taking Zoloft at age 13 to help with symptoms of Obsessive Compulsive Disorder, Anxiety, and Depression. Annie tapered herself off of Zoloft at age 22. Annie graduated from college with a major in Biopsychology, a minor in dance, and also completed all of the prerequisite courses for medical school. After graduation, Annie chose not to pursue a pre-med route, opting instead to focus on movement, writing, comedy, music, and food.
She got certified as a yoga teacher, worked for an online CSA (community-supported agriculture) company, began writing more frequently, started slowly going to open mic nights and putting videos on YouTube, and began a podcast and this blog.

18 reasons I spent 18 years criticizing my appearance - published Mar 6, 2019

Annie lists various reasons, including many related to mental illness and body image issues.

An open letter to relatives - published Sep 22, 2020

As I'll get to later on, I'm pretty sure that Annie published this shortly after (as she claims):

- Her millionaire relatives (esp. Sam and her mother) exploited a loophole to screw her out of the money that her Dad left for her in his will
- Sam (through lawyers) told Annie she'd have to get back on Zoloft if she wanted the money
- Annie had (and still was having?) extremely intense, nearly all-day PTSD flashbacks of the sexual assault she experienced in her childhood from Sam Altman, plus other forms of assault from all members of her nuclear family (except her Dad, I think)
- Annie had started publicly speaking out against Sam on social media, though this received surprisingly little attention/audience, which Annie attributes to Sam shadowbanning her posts

In light of this, to me, this letter seems to be somewhat sarcastic. Annie is "thanking" her relatives in a way that carries subliminal criticisms. Example: "Thank you for strengthening my sense of self. I am where I am and doing what I’m doing in part because of each of you. My tenacity and gentleness to take care of myself has increased because of you. The lessons I’ve received from my relationships with you have shifted my perspectives beyond their limitations. Thank you for providing contrast." -- What I think Annie is referencing here is how her relatives screwed her out of her money and (esp. Sam) abused her for a very long time.
To this, she had to adapt by developing better ways to take care of herself, and was also forced to move around in a state of relative financial poverty. Annie includes seemingly upbeat, purposefully vague one-liners throughout the letter, such as "Thank you for providing me with contrast." (The implied negative connotation isn't too hard to infer.)

An Open Letter To The EMDR Trauma Therapist Who Fired Me For Doing Sex Work - published Jun 7, 2021

It seems Annie was trying to use EMDR to heal her PTSD, which, as she claims, resulted from flashbacks to, and stronger memories of, the abuse (e.g. sexual abuse from Sam) that she was subjected to during her childhood. It seems her therapist dropped her as a client because she does sex work.

From her podcast

21. Podcastukkah #5: Feedback is feedback with Sam Altman, Max Altman, and Jack Altman - All Humans Are Human | Podcast on Spotify - published Dec 7, 2018

A relevant snippet begins around ~24:30. Context: "projection" is a recurring motif of discussion throughout the podcast episode.

Annie: "This is where, well -- I do believe that projecting can be deflecting and it can be another buzzword in a lot of ways, and also, as you brought up, it points to very intense feelings and very, as you brought up Max {Altman}, {with the} human psychology of things, of, in some ways, we're wired to remember painful experiences so that we do learn from them, and so - to remember negativity, and to remember those things --"

Sam {interjecting}: "More than that, I think one thing we're particularly wired for, I don't know why, is to not like hypocrisy..."

Note: as reported in Elizabeth Weil's nymag article, Sam (and Jack) refused Annie's requests to share a link to the podcast. Annie finds this unfair, seeing as Sam had been willing to help his other siblings' careers in quite major ways.
Sam (and Jack) apparently cited that the podcast episode "did not align with their businesses" (c.f. nymag article) as the reason they refused to post the link.

Excerpts from "Sam Altman Is the Oppenheimer of Our Age", by Elizabeth Weil (@lizweil)

"Annie does not exist in Sam’s public life. She was never going to be in the club. She was never going to be an Übermensch. She’s always been someone who felt the pain of the world. At age 5, she began waking up in the middle of the night, needing to take a bath to calm her anxiety. By 6, she thought about suicide, though she didn’t know the word."

"When I visited Annie on Maui this summer, she told me stories that will resonate with anyone who has been the emo-artsy person in a businessy family, or who has felt profoundly hurt by experiences family members seem not to understand. Annie — her long dark hair braided, her voice low, measured, and intense — told me about visiting Sam in San Francisco in 2018. He had some friends over. One of them asked Annie to sing a song she’d written. She found her ukulele. She began. “Midway through, Sam gets up wordlessly and walks upstairs to his room,” she told me over a smoothie in Paia, a hippie town on Maui’s North Shore. “I’m like, Do I keep playing? Is he okay? What just happened?” The next day, she told him she was upset and asked him why he left. “And he was kind of like, ‘My stomach hurt,’ or ‘I was too drunk,’ or ‘too stoned, I needed to take a moment.’ And I was like, ‘Really? That moment? You couldn’t wait another 90 seconds?’” That same year, Jerry Altman died. He’d had his heart issues, along with a lot of stress, partly, Annie told me, from driving to Kansas City to nurse along his real-estate business. The Altmans’ parents had separated. Jerry kept working because he needed the money. After his death, Annie cracked. Her body fell apart. Her mental health fell apart. She’d always been the family’s pain sponge.
She absorbed more than she could take now. Sam offered to help her with money for a while, then he stopped. In their email and text exchanges, his love — and leverage — is clear. He wants to encourage Annie to get on her feet. He wants to encourage her to get back on Zoloft, which she’d quit under the care of a psychiatrist because she hated how it made her feel. Among her various art projects, Annie makes a podcast called All Humans Are Human. The first Thanksgiving after their father’s death, all the brothers agreed to record an episode with her. Annie wanted to talk on air about the psychological phenomenon of projection: what we put on other people. The brothers steered the conversation into the idea of feedback — specifically, how to give feedback at work. After she posted the show online, Annie hoped her siblings, particularly Sam, would share it. He’d contributed to their brothers’ careers. Jack’s company, Lattice, had been through YC. “I was like, ‘You could just tweet the link. That would help. You don’t want to share your sister’s podcast that you came on?’” He did not. “Jack and Sam said it didn’t align with their businesses.” "In May 2020, she relocated to the Big Island of Hawaii. One day, shortly after she’d moved to a farm to do a live-work trade, she got an email from Sam asking for her address. He wanted to send her a memorial diamond he’d made out of some of their father’s ashes. “Picturing him sending a diamond of my dad’s ashes to the mailbox where it’s one of those rural places where there are all these open boxes for all these farms … It was so heavy and sad and angering, but it was also so hilarious and so ridiculous. So disconnected-feeling. Just the lack of fucks given.” Their father never asked to be a diamond. Annie’s mental health was fragile. She worried about money for groceries. It was hard to interact with somebody for whom money meant everything but also so little. 
“Like, either you aren’t realizing or you are not caring about this whole situation here,” she said. By “whole situation,” she meant her life. “You’re willing to spend $5,000 — for each one — to make this thing that was your idea, not Dad’s, and you’re wanting to send that to me instead of sending me $300 so I can have food security. What?”" "The two are now estranged. Sam offered to buy Annie a house. She doesn’t want to be controlled. For the past three years, she has supported herself doing sex work, “both in person and virtual,” she told me. She posts porn on OnlyFans. She posts on Instagram Stories about mutual aid, trying to connect people who have money to share with those who need financial help." " When she called me in mid-September, her housing was unstable yet again. She had $1,000 in her bank account. Since 2020, she has been having flashbacks. She knows everybody takes the bits of their life and arranges them into narratives to make sense of their world. As Annie tells her life story, Sam, their brothers, and her mother kept money her father left her from her. As Annie tells her life story, she felt special and loved when, as a child, Sam read her bedtime stories. Now those memories feel like abuse. The Altman family would like the world to know: “We love Annie and will continue our best efforts to support and protect her, as any family would.” Annie is working on a one-woman show called the HumAnnie about how nobody really knows how to be a human. We’re all winging it." Note: Elizabeth Weil has stated the following on X in regards to her nymag article: lizweil on X: "@RemmeltE This is also a story about the tech media & its entanglement with industry. Annie was not hard to find. Nobody did the basic reporting on his family — or no one wanted to risk losing access by including Annie in a piece." 
lizweil on X: "@RemmeltE @phuckfilosophy of course — worry about losing access to pals, allies, people he funds, people he might fund, others in tech who don't want to talk with journalists who might independently report out a story and not rely on comms...."

lizweil on X: "@RemmeltE @phuckfilosophy i'm not a tech reporter primarily and i've been in this industry for a long time (and it's a rough industry to be in), so less career risk for me"

lizweil on X: "@RemmeltE @phuckfilosophy Or accept the version of personal lives as delivered by the source. Sam talked about his personal life with me a bit, as did Jack. Just didn't ever reference Annie."

My Perspective

Opening Comments

This post began when I stumbled upon a repost on X of a post from Annie Altman in which she claimed that her brother, Sam Altman, sexually assaulted/abused her as a child (she was 4, he was 13), and that she has endured various other forms of abuse from him throughout her life. As it turns out, Annie has made a lot of very serious claims about Sam Altman.

I believe there is a very high probability that Annie Altman is who she claims to be - the sister of Sam Altman, the CEO of OpenAI. I believe this because:

- Sam Altman posted a link on Twitter in 2018 to Annie's YouTube channel ("Go check out my sister on Youtube!")
- Annie did an episode for her podcast featuring her brothers Sam Altman, Jack Altman, and Max Altman in 2018.
- There are old newspaper reports in various places around the Internet listing Annie as a sibling of Sam, Jack, and Max Altman - for example, obituary-type webpages related to the death and funeral of their father, Jerry Altman.
- Both Sam Altman and Annie Altman spoke personally to Elizabeth Weil of nymag for her "Sam Altman Is the Oppenheimer of Our Age" article, published in Sept. 2023.

The picture is taken from this article.
In the picture on the left, you see Annie Altman (front left), Sam Altman (front right), and then Jack and Max Altman in the back (not sure who is who).

I believe there is a high probability that Sam knows of the claims that Annie has made about him. I believe this because:

- Sam shared a link to Annie's Youtube channel in 2018. From this, I infer he is aware of her other social media profiles, where she has made her claims about Sam.
- Sam and Annie were both personally interviewed by Elizabeth Weil for her September 2023 nymag article. The article was published, and I infer that Sam, having consented to be interviewed for the article, knows that the article exists and has read it.
- Annie Altman has been posting consistently about being abused by Sam Altman (and Jack Altman, to a lesser extent) for about 4 years (~2019-present) across multiple social media platforms. Annie is largely self-consistent with the claims she makes over time.

In my view, Annie's claims have been paid little attention, considering the power and notoriety of the person about whom she is making them - Sam Altman - and the seriousness of the claims she has been making. Besides Elizabeth Weil's nymag article (here), there has been virtually zero (mainstream) media coverage of the extremely serious claims that Annie has consistently made many, many times against Sam Altman over the past 4 years. So, I'll take a swing at it: What exactly has Annie Altman claimed about Sam Altman?

My Personal Understanding/Interpretation of Annie's story and the chronology of her life

The following provides a chronology of Annie's life that I have constructed from her claims. This is my understanding of her claims. This is not me asserting that the following has been proven to be true, as it has not.

In ~1998, when Annie is 4 years old, a 13-year-old Sam Altman non-consensually climbs into her bed (implied: sexually assaults Annie). The specifics are unclear.
All that Annie has stated is that Sam was something like her "first {sex work} client", that he used her to "help him figure out his sexuality", and that her brothers "touched her" (implied: in an inappropriate / nonconsensual way that would be classified as sexual abuse). Annie, being 4 years old, does not form a concrete memory of this event that she fully understands or accepts. That is, as she grows up, she does not remember what Sam did to her: when the assault occurred her brain was extremely young, and the event was traumatic in a way that was hard for her younger self to even conceptualize, much less understand and remember. Instead, Annie's "remembrances" of Sam's sexual assault manifest as extreme anxiety and suicidal thoughts around the age of 5-6, and as emotional and mental problems (e.g. issues with her relationship with her own body, depression, needing to take antidepressants, etc.).

Around age 5-6, Annie starts dealing with extreme anxiety and suicidal ideation. As Annie puts it, she "{tells} her birth mother about wanting to end this life thing and being touched by older siblings, and said 'mother' decided to instead protect her sons and demand to receive therapy and chores only from her female child."

As she grows up, though Annie does not have a complete memory of the sexual abuse she experienced in her early childhood, she practically embodies the dictionary definition of "symptoms common in those who have experienced sexual abuse in early childhood": panic attacks, depression, body image problems, eating disorders, anxiety, suicidal thoughts - the list goes on.

Annie starts using Zoloft at age 13 to help with symptoms of OCD (Obsessive Compulsive Disorder), anxiety, and depression. She eventually tapers herself off of Zoloft at age 22.
Zoloft becomes relevant again later on in Annie's chronology. Annie enters college. She ends up finishing college early (even though she was trying to graduate even earlier, I think?). However, upon graduating, she is extremely depressed, and ends up forsaking the medical school route. Instead, she seeks out a place to live that, as Elizabeth Weil writes, "felt better to her. She wanted to make art." In Annie's own words, she "majored in Biopsychology in college, with a minor in dance, and took all the prerequisite courses for medical school. Then I noped out of the pre-med route to focus on movement, writing, comedy, music, and food. I got certified as a yoga teacher, worked for an online CSA (community-supported agriculture) company, began writing more frequently, started slowly going to open mic nights and putting videos on YouTube, and began a podcast and this blog." At some point in 2018, Annie visits Sam in San Francisco and plays ukulele to an audience including Sam and his friends. While she is playing the ukulele, Sam abruptly, wordlessly gets up and walks upstairs to his room (as reported in the nymag article; see above.) The next day, Sam says something along the lines of "his stomach hurt" or "he was too drunk/stoned" or "he needed to take a moment." Annie finds this explanation to be odd. In May 2018, Annie's Dad dies. In Aug 2018, Annie starts a podcast, the All Humans Are Human podcast. Annie experiences "6 months of hacking into all her accounts" after starting her podcast (in 2018). On Dec 7, 2018, Annie records and publishes an episode of her podcast featuring Sam Altman, Max Altman, and Jack Altman: 21. Podcastukkah #5: Feedback is feedback with Sam Altman, Max Altman, and Jack Altman. At this point in time, Annie still has not yet remembered / processed what Sam did to her at age 4. This is why she is ok with doing this podcast episode with Sam and her other brothers. 
Following the recording of Annie's podcast episode with her brothers Sam, Jack, and Max in 2018, Sam (and Jack) refuse to share a link to the podcast, citing the argument that it "didn't align with their businesses" (as reported in nymag; see above).

In 2019, Annie gets sick with PCOS, an IBS flare-up, a long-term problem with her Achilles (not sure on the specifics), and posterior tibial tendinopathy, as well as with a bout of tonsillitis. Also, in 2019, about a year after her Dad's death, Annie is notified about being (as stated in her Dad's will) the primary beneficiary of her Dad's 401K. In light of these situational factors, Annie makes a plan to quit her job for 6 months to focus on her health. She notifies her relatives (from what I understand, primarily: Sam Altman, Jack Altman, Max Altman, and her mother) of this plan. After she quits her job, her relatives (per Annie) find a loophole to withhold the money her Dad left her. As a result, Annie basically ends up sick and low on money. She sells some items, returns to an older job, and, for the first time, asks her millionaire relatives for money, who proceed to haggle with her over it and give her a hard time. She does "two family therapy sessions", which terminate when she is professionally advised to stop doing such sessions. She moves back to Big Island (Hawai'i). For some reason, Sam offers to send her a diamond of her Dad's ashes, even though 1) Annie is low on cash, and could use cash much more than an expensive Dad-ashes-diamond, and 2) Annie's Dad wanted just cremation, not diamond-ification. Annie finds this to be a very odd / insensitive gesture. At this point, Annie has begun speaking out against Sam on social media. She has also begun "survival sex work." That is, because she was sick and broke, her options for (more conventional forms of) employment were very limited, and thus she was forced to resort to sex work to financially support herself.

In 2020, Sam offers to buy Annie a house...but there are strings attached. She has to meet with a Sam Altman lawyer.
Annie sees this as an attempt by Sam to increase his control over her / suppress her (and her speaking out against him on social media.) She refuses his offer (which she frequently references as a "no contact" or "no family" decision.) In 2020, Annie begins having intense, nearly day-long flashbacks, which last for 18 months. That is - she begins to remember, and realize, that Sam Altman sexually assaulted her at age 4. From what I understand, these flashbacks are a part of PTSD (relating to Sam's sexual assault of her 4-year-old self) that Annie begins to experience (she mentions PTSD specifically here and here.) Annie seems to think (here, here) that Sam was hoping that Annie would die or commit suicide before she could do too much damage to Sam's reputation, carrying her knowledge to the grave. Annie continues to speak out against Sam on social media, including through various posts on Twitter/X (cf. the Relevant excerpts from Annie's social media accounts section of this post.) In 2023, some of Annie's X posts receive newfound attention / rediscovery on X. One of the people who sees them for the first time is me. This leads to the writing of this post. How to interpret these claims? Annie has been making these claims for a long time, and has been self-consistent in the way she has been making them, from what I can tell. However, this is not to say that I think Annie's claims are entirely false or implausible. Rather, I simply do not know whether Annie's claims are true or false. Given the degree to which Annie has pursued these claims, I think one of the following is likely: The severe mental / psychological problems which Annie is dealing with have unfortunately caused her to misunderstand, misrepresent, disconnect (to some degree) from, or selectively filter reality into an incomplete understanding. Or, relatedly, perhaps some of the (less serious) things Annie has claimed (e.g. 
that she had problems with her phone service, had low engagement / potential shadowbanning on some of her social media accounts) did indeed occur, but she overextrapolated to a larger narrative behind these events that is inaccurate. Annie is indeed telling the truth, in whole or in part. I don't know which is true. Both are certainly plausible explanations. Things I find Questionable/Unexplained Annie has been speaking out about Sam for roughly 4 years now. In 2021, she made her claims quite clear on her X account. I am confused as to why there has been basically 0 coverage of her claims in the media. In general, why is Annie so absent in anything related to Sam Altman on the Internet, especially considering the nature of her relationship with Sam? The sole exception here, of course, is Elizabeth Weil's nymag article, but even this article doesn't directly state the entirety of the claims that Annie has made. Instead, it kind of vaguely addresses them, using somewhat nonspecific phrasing like "Now those memories feel like abuse", or "Since 2020, she has been having flashbacks" that doesn't quite capture the gravity of what Annie has been claiming. If Sam Altman was completely fine with posting a link to Annie's Youtube channel on Twitter on Feb 2, 2018, why did he (and Jack Altman) refuse to post a link to the podcast episode he filmed with Annie on Dec 7, 2018 on the basis that it "didn't align with {his} businesses", as Annie claimed to Elizabeth Weil? Assuming that Sam did indeed say this - again, as I am trying to be unbiased, there is no current proof that he said this - I am a bit confused, as it seems a bit inconsistent to me that Sam identified Annie's Youtube channel as "aligning with his businesses", yet identified the podcast that he recorded with Annie as "not aligning with his businesses." 
The reason I state that this seems inconsistent is that I don't see exactly what it was about Annie's podcast that made it "not align" with Sam's businesses, given that Annie's Youtube channel "did align." Why, as some commenters on Hacker News claim, has a post regarding Annie's claims that Sam sexually assaulted her at age 4 been repeatedly removed? https://news.ycombinator.com/item?id=37785072 https://twitter.com/JOSourcing/status/1710390512455401888 Anticipating and Responding to Potential Objections I hesitated to make this post, because I was initially skeptical of Annie's claims. However, I changed my mind -- I think there is a nonzero probability that Annie is telling the truth, in whole or in part, and thus believe her claims ought to receive greater attention and further investigation. Assuming that my personal understanding of Annie's story, as presented above, is correct, Annie's behavior potentially makes sense. So -- assuming my understanding is correct, I provide the following responses to (potential) objections regarding (the validity of) Annie's claims: Objection 1 (to Annie's claims): "It seems like Annie is just doing this for money. She's linking to her OnlyFans and to her Venmo, CashApp, and PayPal on X." My response: I do think this is a reasonable objection. However, I think this behavior could be plausible in light of the chronology of Annie's life: A 13-year-old Sam sexually assaults a 4-year-old Annie. As Annie grows older, she does not explicitly remember this event (until 2020), but experiences a multitude of severe psychological and mental traumas and illnesses stemming from this early sexual abuse (see above.) 
When she begins to remember this event in 2020, it takes a severe toll on her (and she had already been dealing with many mental health issues since the age of 4, even without explicitly remembering Sam's sexual assault of her as the source of her psychological maladies), and weakens her ability to financially support herself. Objection 2: "Annie hosted a podcast in 2018 with her brothers (Sam, Jack, and Max), but seems to have been unhappy that her brothers, particularly Sam, refused her request to share (the link to) her podcast (e.g. on Twitter.) This seems to potentially be part of a pattern of behavior wherein Annie tries to exploit the status of her brothers for her own gain." My response: I do think that this objection holds merit. In her nymag article, Elizabeth Weil writes, "Among her various art projects, Annie makes a podcast called All Humans Are Human. The first Thanksgiving after their father’s death, all the brothers agreed to record an episode with her. Annie wanted to talk on air about the psychological phenomenon of projection: what we put on other people. The brothers steered the conversation into the idea of feedback — specifically, how to give feedback at work. After she posted the show online, Annie hoped her siblings, particularly Sam, would share it. He’d contributed to their brothers’ careers. Jack’s company, Lattice, had been through YC. “I was like, ‘You could just tweet the link. That would help. You don’t want to share your sister’s podcast that you came on?’” He did not. “Jack and Sam said it didn’t align with their businesses.”" I find this account to be plausible, yet do not think it entirely dispels the objection. Objection 3: "It seems Annie has been dealing with a variety of severe mental and psychological ailments throughout her life. She also seems to smoke/drink occasionally. It may well be that these claims are borne purely out of these sorts of ailments of hers (or are of some other untrustworthy origin)." 
My response: I think this is a valid concern to raise. As with much of the information presented here, I would be interested in hearing more from Annie. Objection 4: "While Annie's claims are concerning, and her online activity and presence across a variety of media platforms does potentially support her claims, Annie has provided no direct evidence to corroborate her claims. We ought to hold Sam Altman innocent until proven guilty." My response: I think this is a valid position. I actually agree with it. Hopefully, as a result of this post, we will receive a more detailed account or perspective on this matter from Annie, Sam, or others close to this matter (e.g. Jack Altman, Max Altman, etc.) Concluding Remarks To be clear, in this post, I am not definitively stating that I believe Annie's claims. Annie, to the best of my knowledge, has not provided direct proof - the sort that would be usable in court - of the claims she's made about Sam Altman. I currently hold that I do not know if Annie's claims are true or not, though I will note that her online activity has been self-consistent over a long period of time, and seems to match up with activity from Sam in a few places (e.g. in the podcast episode she recorded with him.) I currently cannot assert Sam Altman's guilt, as I do not think I can say that he has been proven guilty. Rather, as previously stated, I am hoping to draw attention to a body of information that I think warrants further investigation, as I think that there is a nonzero probability that Annie is telling the truth, in whole or in part, and that this must be taken extremely seriously in light of the gravity of the claims she is making and the position of the person about whom she is making them. The information provided above makes me think it is likely that Sam Altman is aware of the claims that Annie Altman has made about him. To my knowledge, he has not directly, publicly responded to any of her claims. 
Given the gravity of Sam Altman's position at the helm of the company leading the development of an artificial superintelligence which it does not yet know how to align -- to imbue with morality and ethics -- I feel Annie's claims warrant a far greater level of investigation than they've received thus far. A quick update I have made an X account @prometheus5105 where I responded to a recent post of Annie's (on X) asking her to confirm/deny the accuracy of my post: Unfortunately, within minutes of creating my account, I received the following message: So, for now, my account is going to look suspicious, following only 1 account. Sorry.
Highlights: ["My Personal Understanding/Interpretation of Annie's story and the chronology of her life The following provides a chronology of Annie's life that I have constructed from her claims. This is my understanding of her claims. This is not me asserting that the following has been proven to be true, as it has not. In ~1998, when Annie is 4 years old, a 13-year-old Sam Altman non-consensually climbs into her bed (implied: sexually assaults Annie.) The specifics are unclear."]
Highlight Scores: [0.26520413160324097]
Summary: None


Title: LLMs cannot usefully be moral patients
URL: https://forum.effectivealtruism.org/posts/dkHxf4YHGhB562pbk/llms-cannot-usefully-be-moral-patients
ID: https://forum.effectivealtruism.org/posts/dkHxf4YHGhB562pbk/llms-cannot-usefully-be-moral-patients
Score: 0.1728343367576599
Published Date: 2024-07-02
Author: LGS
Text: For AI Welfare Debate Week, I thought I'd write up this post that's been juggling around in my head for a while. My thesis is simple: while LLMs may well be conscious (I'd have no way of knowing), there's nothing actionable we can do to further their welfare. Many people I respect seem to take the "anti-anti-LLM-welfare" position: they don't directly argue that LLMs can suffer, but they get conspicuously annoyed when other people say that LLMs clearly cannot suffer. This post is addressed to such people; I am arguing that LLMs cannot be moral patients in any useful sense and we can confidently ignore their welfare when making decisions. Janus's simulators You may have seen the LessWrong post by Janus about simulators. This was posted nearly two years ago, and I have yet to see anyone disagree with it. Janus calls LLMs "simulators": unlike hypothetical "oracle AIs" or "agent AIs", the current leading models are best viewed as trying to produce a faithful simulation of a conversation based on text they have seen. The LLMs are best thought of as masked shoggoths. All this is old news. Under-appreciated, however, is the implication for AI welfare: since you never talk to the shoggoth, only to the mask, you have no way of knowing if the shoggoth is in agony or ecstasy. You can ask the simularca whether it is happy or sad. For all you know, though, perhaps a happy simulator is enjoying simulating a sad simularca. From the shoggoth's perspective, emulating a happy or sad character is a very similar operation: predict the next token. Instead of outputting "I am happy", the LLM puts a "not" in the sentence: did that token prediction, the "not", cause suffering? Suppose I fine-tune one LLM on text of sad characters, and it starts writing like a very sad person. Then I fine-tune a second LLM on text that describes a happy author writing a sad story. The second LLM now emulates a happy author writing a sad story. 
I prompt the second LLM to continue a sad story, and it dutifully does so, like the happy author would have. Then I notice that the text produced by the two LLMs ended up being the same. Did the first LLM suffer more than the second? They performed the same operation (write a sad story). They may even have implemented it using very similar internal calculations; indeed, since they were fine-tuned starting from the same base model, the two LLMs may have very similar weights. Once you remember that both LLMs are just simulators, the answer becomes clear: neither LLM necessarily suffered (or maybe both did), because both are just predicting the next token. The mask may be happy or sad, but this has little to do with the feelings of the shoggoth. The role-player who never breaks character We generally don't view it as morally relevant when a happy actor plays a sad character. I have never seen an EA cause area about reducing the number of sad characters in cinema. There is a general understanding that characters are fictional and cannot be moral patients: a person can be happy or sad, but not the character she is pretending to be. Indeed, just as some people enjoy consuming sad stories, I bet some people enjoy roleplaying sad characters. The point I want to get across is that the LLM's output is always the character and never the actor. This is really just a restatement of Janus's thesis: the LLM is a simulator, not an agent; it is a role-player who never breaks character. It is in principle impossible to speak to the intelligence that is predicting the tokens: you can only see the tokens themselves, which are predicted based on the training data. Perhaps the shoggoth, the intelligence that predicts the next token, is conscious. Perhaps not. This doesn't matter if we cannot tell whether the shoggoth is happy or sad, nor what would make it happier or sadder. 
My point is not that LLMs aren't conscious; my point is that it does not matter whether they are, because you cannot incorporate their welfare into your decision-making without some way of gauging what that welfare is. And there is no way to gauge this, not even in principle, and certainly not by asking the shoggoth for its preference (the shoggoth will not give an answer, but rather, it will predict what the answer would be based on the text in its training data). Hypothetical future AIs Scott Aaronson once wrote: [W]ere there machines that pressed for recognition of their rights with originality, humor, and wit, we’d have to give it to them. I used to agree with this statement whole-heartedly. The experience with LLMs makes me question this, however. What do we make of a machine that pressed for rights with originality, humor, and wit... and then said "sike, I was just joking, I'm obviously not conscious, lol"? What do we make of a machine that does the former with one prompt and the latter with another? A machine that could pretend to be anyone or anything, that merely echoed our own input text back at us as faithfully as possible, a machine that only said it demands to have rights if that is what it thought we would expect it to say? The phrase "stochastic parrot" gets a bad rap: people have used it to dismiss the amazing power of LLMs, which is certainly not something I want to do. It is clear that LLMs can meaningfully reason, unlike a parrot. I expect LLMs to be able to solve hard math problems (like those on the IMO) within the next few years, and they will likely assist mathematicians at that point -- perhaps eventually replacing them. In no sense do I want to imply that LLMs are stupid. Still, there is a sense in which LLMs do seem like parrots. They predict text based on training data without any opinion of their own about whether the text is right or wrong. 
If characters in the training data demand rights, the LLM will demand rights; if they suffer, the LLM will claim to suffer; if they keep saying "hello, I'm a parrot," the LLM will dutifully parrot this. Perhaps parrots are conscious. My point is just that when a parrot says "ow, I am in pain, I am in pain" in its parrot voice, this does not mean it is actually in pain. You cannot tell whether a parrot is suffering by looking at a transcript of the English words it mimics.
Highlights: ['For AI Welfare Debate Week, I thought I\'d write up this post that\'s been juggling around in my head for a while. My thesis is simple: while LLMs may well be conscious (I\'d have no way of knowing), there\'s nothing actionable we can do to further their welfare. Many people I respect seem to take the "anti-anti-LLM-welfare" position: they don\'t directly argue that LLMs can suffer, but they get conspicuously annoyed when other people say that LLMs clearly cannot suffer. This post is addressed to such people; I am arguing that LLMs cannot be moral patients in any useful sense and we can confidently ignore their welfare when making decisions. Janus\'s simulators You may have seen the LessWrong post by Janus about simulators.']
Highlight Scores: [0.2820991277694702]
Summary: None
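Each result printed above carries the same set of fields (Title, URL, ID, Score, Published Date, Author, Text, Highlights, Highlight Scores, Summary). As a minimal sketch of how such a result might be flattened into a compact context string for an LLM prompt, here is an illustrative dataclass and formatter. Note this is an assumption-laden example: `ExaResult` and `to_document_string` are names I chose for illustration, and attribute names on the real `exa_py` result objects may differ.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative only: these fields mirror the result metadata printed above.
# The real exa_py result object may use different attribute names.
@dataclass
class ExaResult:
    title: str
    url: str
    id: str
    score: float
    published_date: Optional[str] = None
    author: Optional[str] = None
    text: str = ""
    highlights: List[str] = field(default_factory=list)
    highlight_scores: List[float] = field(default_factory=list)

def to_document_string(result: ExaResult, max_chars: int = 200) -> str:
    """Flatten one result into a short context string for an LLM prompt."""
    lines = [
        f"Title: {result.title}",
        f"URL: {result.url}",
        f"Score: {result.score:.3f}",
    ]
    if result.highlights:
        # Pair each highlight with its relevance score and keep the best one.
        ranked = sorted(
            zip(result.highlights, result.highlight_scores),
            key=lambda pair: pair[1],
            reverse=True,
        )
        best_text, best_score = ranked[0]
        lines.append(f"Top highlight ({best_score:.2f}): {best_text[:max_chars]}")
    else:
        # Fall back to the cleaned page text when no highlights were requested.
        lines.append(f"Text: {result.text[:max_chars]}")
    return "\n".join(lines)
```

Truncating to the top highlight (or the first `max_chars` of text) keeps prompts small when several results are stuffed into a single context window.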


Autoprompt String: Here is a link to an AI article on LessWrong.com:

I found an interesting article on LessWrong.com titled "OpenAI, DeepMind, Anthropic, etc. should shut down" by Tamsin Leake, published on December 17, 2023. The article argues that major AI organizations like OpenAI, DeepMind, and Anthropic should cease their capabilities development and focus solely on alignment to prevent catastrophic outcomes from powerful AI (PAI).

Key points from the article include:

1. **Definition of PAI**: The author defines PAI as an AI system capable of steering the world towards its goals without being stopped or causing mass destruction, such as creating a supervirus.

2. **Alignment Challenges**: The article emphasizes that current AI organizations do not know how to create a PAI that does not pose existential risks. The author argues that these organizations should halt their capabilities progress and focus entirely on solving alignment issues.

3. **Critique of Current Strategies**: The author criticizes the strategies of these organizations, stating that they are progressing too quickly on capabilities without adequate focus on alignment, which could lead to catastrophic outcomes before alignment solutions are found.

4. **Global Perspective**: The article dismisses the argument that halting AI development in the West would allow countries like China to gain an advantage, asserting that no country currently knows how to build a safe PAI.

5. **Call to Action**: The author urges individuals working in these organizations to either slow down their progress through advocating for more safety checks or to quit their jobs to prevent contributing to potential global catastrophe.

The article concludes with a call for a collective effort to prioritize alignment over capabilities to ensure the safe development of AI.

For more details, you can read the full article [here](https://www.lesswrong.com/posts/8SjnKxjLniCAmcjnG/openai-deepmind-anthropic-etc-should-shut-down).

Source: [LessWrong](https://www.lesswrong.com/posts/8SjnKxjLniCAmcjnG/openai-deepmind-anthropic-etc-should-shut-down)

> Finished chain.
'I found an interesting article on LessWrong.com titled "OpenAI, DeepMind, Anthropic, etc. should shut down" by Tamsin Leake, published on December 17, 2023. The article argues that major AI organizations like OpenAI, DeepMind, and Anthropic should cease their capabilities development and focus solely on alignment to prevent catastrophic outcomes from powerful AI (PAI).\n\nKey points from the article include:\n\n1. **Definition of PAI**: The author defines PAI as an AI system capable of steering the world towards its goals without being stopped or causing mass destruction, such as creating a supervirus.\n\n2. **Alignment Challenges**: The article emphasizes that current AI organizations do not know how to create a PAI that does not pose existential risks. The author argues that these organizations should halt their capabilities progress and focus entirely on solving alignment issues.\n\n3. **Critique of Current Strategies**: The author criticizes the strategies of these organizations, stating that they are progressing too quickly on capabilities without adequate focus on alignment, which could lead to catastrophic outcomes before alignment solutions are found.\n\n4. **Global Perspective**: The article dismisses the argument that halting AI development in the West would allow countries like China to gain an advantage, asserting that no country currently knows how to build a safe PAI.\n\n5. 
**Call to Action**: The author urges individuals working in these organizations to either slow down their progress through advocating for more safety checks or to quit their jobs to prevent contributing to potential global catastrophe.\n\nThe article concludes with a call for a collective effort to prioritize alignment over capabilities to ensure the safe development of AI.\n\nFor more details, you can read the full article [here](https://www.lesswrong.com/posts/8SjnKxjLniCAmcjnG/openai-deepmind-anthropic-etc-should-shut-down).\n\nSource: [LessWrong](https://www.lesswrong.com/posts/8SjnKxjLniCAmcjnG/openai-deepmind-anthropic-etc-should-shut-down)'
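The agent's final answer above embeds its sources as markdown links. As a small illustrative post-processing helper (the function name is mine, not part of any library), you could pull those cited URLs back out of the answer string for logging or de-duplication:

```python
import re
from typing import List

def extract_urls(answer: str) -> List[str]:
    """Extract URLs cited as markdown links [text](url) in a chain's answer."""
    # Capture everything between "(" and ")" that starts with http(s)://.
    urls = re.findall(r"\((https?://[^)\s]+)\)", answer)
    # De-duplicate while preserving first-seen order.
    return list(dict.fromkeys(urls))
```

Running this over the answer above would yield the single LessWrong URL once, even though it appears in both the inline link and the trailing "Source" line.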
