Dataset columns: id (string, length 36); source (string, 15 classes); formatted_source (string, 13 classes); text (string, length 2 to 7.55M)
aa32e7fd-56bb-4648-9000-1f9dca6d5ac6
trentmkelly/LessWrong-43k
LessWrong
Why have insurance markets succeeded where prediction markets have not? Insurance is big business and is load-bearing for many industries. It has gained popular acceptance. Prediction markets have not. This is despite clear similarities between the two domains:

* You are trading financial contracts on the odds of real-world events.
* The market price of a policy elicits and aggregates information about the forecasts of professional traders, and this price can inform the public's view of risk.
* Insurance is a tax on BS. (If one underwriter says to you "that policy you wrote, it's a steal, that boat's never gonna sink", you can say "wanna bet?" and let them reinsure you.)

Or you can consider this extract from an official history of Lloyd's of London[1]:

> It was possible to get a policy - which is a dignified way of saying a bet - on almost anything. You could get a policy on whether there would be a war with France or Spain, whether John Wilkes would be arrested or die in jail, or whether some Parliamentary candidate would be elected. Underwriters offered premiums of 25 per cent on George II's safe return from Dettingen; there were policies on whether this or that mistress of Louis XV would continue in favour or not.
>
> One particularly grisly form of speculation, quoted by Thomas Mortimer in a book published in 1781 and engagingly entitled The Mystery and Iniquity of Stock Jobbing, was this:
>
> > A practice likewise prevailed of insuring the lives of well-known personages as soon as a paragraph appeared in the newspapers, announcing them to be dangerously ill. The insurance rose in proportion as intelligence could be procured from the servants, or from any of the faculty attending, that the patient was in grave danger. This inhuman sport affected the minds of men depressed by long sickness; for when such persons, casting an eye over a newspaper for amusement, saw that their lives had been insured in the Alley at 90 per cent, they despaired of all hopes; and thus their dissolution was hastened.
fb934409-6c2e-417f-90d0-6ae3096c22dd
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post3791 Jaan Tallinn has suggested creating a toy model of the various common AI arguments, so that they can be analysed without loaded concepts like "autonomy", "consciousness", or "intentionality". Here is a simple attempt for the "treacherous turn"; posted here for comments and suggestions. Meet agent L. This agent is a reinforcement-based agent, rewarded/motivated by hearts (and some small time penalty each turn it doesn't get a heart). Meet the controller agent S (the "Shopkeeper"). The controller desires crystals, and is rewarded/happy when crystals are placed in the hole close to them. L can rove freely around the map and pick up crystals. S has a machine just below them that causes hearts to appear to the right of L. So, for instance, if L places a crystal in the hole, S can reward it by using the heart-machine. Of course, L can wirehead itself (take control of its reward signal) by going to use the heart-machine itself. To prevent this wireheading, there is an ice barrier around S and the heart-machine. Unfortunately, somewhere on the map there is a bow and arrow that L can pick up. The arrows can destroy ice blocks (and other agents, including S). To stop this, and other kinds of shenanigans, S has access to a kill button that will instantly kill L and reset it to its basic state, with a substantial heart penalty. Now, S may be tempted to use the kill button as soon as L picks up the bow and arrow. Unfortunately, L needs to be able to pick it up, as some of the crystals are hidden behind ice walls themselves.

Information and control

Ok, so that's the physical setup. The agent L is an AI that is tasked with doing some useful task for us. The controller/shopkeeper S represents humans, plus any automated precautions, that aim to stop the AI from wireheading, rebelling, and otherwise misbehaving. Before considering how the agents might be programmed in the toy model, notice something interesting: the more precautions are taken, the harder it is for L to misbehave, but the worse the consequences of misbehaving are. Before the wall and the kill-switch, L could easily wirehead itself by just walking over to the heart-machine. With the addition of the wall and kill-switch, the only effective way for L to achieve maximal reward is to kill S first. In the format we've given above, S can easily maintain control over L with the barest of supervision. Since there's only one way of breaking ice or killing other agents - the bow and arrow - S simply has to enforce some simple rules around this, something like "don't bring the bow within range of me". However, the grid world is, of course, a simplification of the real world in which L would evolve. Even initially, L's world would have far more options, and ultimately we want to be able to design Ls that move in worlds of arbitrary complexity, where the options are un-enumerable. In such a world, it is not obvious to S when L might have the ability to harm them: they cannot simply rely on obvious signs like "L is pointing an arrow at you". The simplicity of the toy model is illuminating in some ways (it allows us to see the issues clearly) but misleading in others (it suggests solutions that wouldn't work in general).

Treacherous turn

A treacherous turn happens when "while weak, an AI behaves cooperatively. When the AI is strong enough to be unstoppable it pursues its own values."
We'll ignore the first part of that: most AI designs, whether safe or dangerous, will initially be under full human control and behaving cooperatively (this is what makes the treacherous turn so dangerous: a weak AI is motivated to cooperate whatever its true intentions are, so cooperation when weak cannot be taken as a sign of safety). Let's focus on the second half: a strong AI that will cease cooperating and pursue its own values. In this case, L would just take control of the heart-machine. To do so, various assumptions need to be true:

1. L needs to have the ability to subvert S's control (possibly killing S in the process). This is some generalised version of the bow and arrow.
2. L must know that it has this ability, and know how to use it.
3. L needs to have the desire to subvert S's control.
4. S must be unaware that L has that a) ability, b) knowledge, and c) desire.
5. L must know that S is unaware, and will thus not try to stop it.

So it is essentially a competition of knowledge and information (and potentially obfuscation). Assuming a sufficiently rich environment and a sufficiently skilled L, we can assume that L will acquire that ability - or have the potential to do so - in ways that S cannot immediately detect. Therefore we will assume 1 and 4a. This allows us to abstract away the rest of the environment, and reduce it usefully to the initial toy model. For simplicity, assume that there is some ten-step "cheat code" that L can use to eliminate S, that there are many cheat codes that L can and does usefully use to get crystals (so "no cheat codes" is not a useful rule), that S cannot distinguish the purpose of cheat codes ahead of time, and that L can. So we now step up to the question of agents knowing (or modelling) each other. This gets into rather complex issues of agents modelling each other's intentions, while modelling the other agent modelling them modelling them, and so on. The issue remains very much [unsolved](https://intelligence.org/files/VingeanReflection.pdf), so we will use a simplification that only requires S modelling L. L will be seen as some sort of agent that makes use of deep learning or similar procedures, and is subject to mutation and evolutionary pressure: random changes are made to L's design, and the mutants get copied at a rate dependent on how pleased S is with their performance. S, on the other hand, is a static agent with certain properties, and full access to L's source code (and possibly recordings of its behaviour). It doesn't have unlimited capacity, but has some understanding of how neural nets function (so L isn't a complete black box to it) and can run various tests on L's code, seeking out information as to L's knowledge and desires. Note that even if S cannot deduce directly that a particular cheat code is deadly to it, it may be able to figure that out if L knows it (assumption 2) and if S can correctly extract this knowledge.

The path to hidden deadly secrets

Note that because of the evolutionary aspect, L is an agent actually motivated by a mixture of hearts and S-pleasing. The purpose of making L an evolutionary agent is so that it can come up with behaviours that take into account S's own behaviour, while skirting the whole "agents modelling other agents that model them" issue. Conceptually, we could equally well see L as a self-modifying agent that directs its own improvement. But the evolutionary model is clearer.
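To make the evolutionary setup concrete, here is a minimal sketch in Python. Everything in it (the two-number parameter vector, the `mutate` step, and the stand-in `s_approval` score) is made up purely for illustration; it is not the model from the post, just the bare selection loop it describes.

```python
import random

# Minimal sketch of the selection loop described above. The parameter
# vector, mutation step, and S's approval score are all made up for
# illustration; this is not the post's actual model.

def mutate(params, rate=0.1):
    return [p + random.gauss(0, rate) for p in params]

def s_approval(params):
    # Stand-in for how pleased S is with the observed behaviour:
    # S rewards apparent productivity and penalises *visible* rudeness.
    productivity, visible_rudeness = params
    return productivity - 2 * max(visible_rudeness, 0)

population = [[0.0, 0.0] for _ in range(100)]
for generation in range(50):
    ranked = sorted(population, key=s_approval, reverse=True)
    survivors = ranked[: len(ranked) // 2]        # copied at a rate set by S's approval
    population = survivors + [mutate(p) for p in survivors]

best = max(population, key=s_approval)
print(f"most S-approved parameters after selection: {best}")
```

This only captures the selection-under-supervision loop; the argument in the next paragraphs is about what such a loop does when S's scoring can only see part of L's behaviour.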
For simplicity, we'll assume some sort of "super-evolution" which is much more efficient at seeking out new solutions than the local hill-climbing of real evolution (or, more simply, we can assume the standard evolutionary approaches but a lot of time and non-zero probabilities of arbitrarily large mutations). So is a treacherous turn possible or likely in this model? It seems that it is. Consider the following scenario (key assumptions are noted in bold): The AI L goes about its task, getting to learn more about the world around it. At this point, if it develops bad motivations or plans, S will likely detect this and reset it. **Assumption 1: there are some motivations or behaviours that S would judge to be negative, but that actually help L achieve its goals safely.** For instance, imagine that L has to interact with some other agents, and learns that some agents respond faster to rudeness, and insulting them - in moderation - can allow it to get crystals faster. But S judges this to be a bad sign, a sign that L is starting to misbehave, and will shut down agents that are motivated towards rudeness. It's certainly true that: **Assumption 2: S cannot follow all of L's actions in full detail, nor can it fully understand L's motivations** (if that were false, there would be no AI risk problem in the first place) and **Assumption 3: some mutations introduce new motivations or behaviours that are harder for S to detect.** This sets up an evolutionary pressure: the most successful Ls will be those that are rude and pushy, but where this rudeness is generated by parts of L's code that are hardest for S to detect (the polite ones don't get as much done, the visibly rude ones get shut off as a precaution). This brings us to **Assumption 4: the most successful Ls over the long term are those that have safe motivations or behaviours, but where those motivations and behaviours are partially hidden from S.** Ultimately, the most successful Ls will have high-level systems that a) keep them behaving nicely from S's perspective, and b) keep the details of their motivations hidden. At this point, revelations about L's deception are dangerous (S is presumably on the lookout for deception), so the most successful agents will have the best obfuscation about what is going on. The evolutionary pressure for success pushes it towards "small" obfuscated deceptions, and: **Assumption 5: further evolutionary pressure (possibly including the need to hide their small deceptions) pushes L towards highly concealed deceptions.** So now L is pushed towards extreme deception, and efficient service. However, the second pressure is imperfect: it is only pushed towards efficient service that can be detected by S. Since L will have its own goals (the desire for hearts, for reproductive survival, any of the convergent instrumental goals), we have **Assumption 6: agent L will use (or evolution will allow it to use) the machinery of dissimulation that it has constructed to allow itself to have dangerous goals or behaviours, in ways that neither interfere with the illusion of efficient service, nor can be detected by S's analysis of its code.** Ultimately, L will achieve a great degree of hidden power, and **Assumption 7: ultimately L will achieve sufficient power that it will subvert S's control.**
This could be a result of it gaining sufficient (hidden) understanding of the world and of S and choosing this path (assumptions 2 and 5 of the previous section), or a random mutation may simply push it towards doing so, and it will be sufficiently powerful to do so successfully. And thus, the treacherous turn. There are several places where S might detect or interrupt the process, but this just sets up further evolutionary pressure towards better concealment. Note again that the order of these changes is important. An agent that started off safe and transparent and then became dangerous and transparent would be turned off immediately. It must develop the ability to hide (either consciously/subconsciously, or simply evolve that ability to become more successful at its safe initial goals) before it can become dangerous (note that "safe" doesn't refer to its goals being safe; simply that the combination of goals, internal procedures and external precaution - i.e. S - are safe). Note also that the evolutionary arguments show that this can happen even if the AI does not start off with any motivation to deceive.
8ebff371-9ef6-4da4-8dc8-5a3b4e66daae
trentmkelly/LessWrong-43k
LessWrong
Probability updating question - 99.9999% chance of tails, heads on first flip This isn't intended as a full discussion; I'm just a little fuzzy on how a Bayesian update or any other kind of probability update would work in this situation. You have a coin with a 99.9999% chance of coming up tails, and a 100% chance of coming up either tails or heads. You've deduced these odds by studying the weight of the coin. You are 99% confident of your results. You have not yet flipped it. You have no other information before flipping the coin. You flip the coin once. It comes up heads. How would you update your probability estimates? (This isn't a homework assignment; rather, I was discussing with someone how strong the anthropic principle is. Unfortunately my mathematical abilities can't quite comprehend how to assemble this into any form I can work with.)
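One way to set the update up, sketched below: treat "my weight analysis is correct" as a hypothesis with prior 0.99, and add the extra assumption (not given in the question) that if the analysis is wrong the coin is roughly fair.

```python
# Hedged sketch of one way to frame the update. The question supplies
# P(analysis correct) = 0.99 and P(heads | analysis correct) = 1e-6;
# the alternative "analysis wrong -> coin roughly fair" is an assumption
# added here purely for illustration.
p_correct = 0.99
p_heads_if_correct = 1e-6      # 99.9999% tails under the deduced model
p_heads_if_wrong = 0.5         # assumed fair coin if the analysis is wrong

joint_correct = p_correct * p_heads_if_correct
joint_wrong = (1 - p_correct) * p_heads_if_wrong

posterior_correct = joint_correct / (joint_correct + joint_wrong)
print(f"P(analysis correct | heads) ≈ {posterior_correct:.6f}")  # ≈ 0.000198
```

Under that assumed alternative, a single heads drops the probability that the analysis was correct from 0.99 to roughly 0.0002; the answer is sensitive to what "my analysis is wrong" is taken to mean.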
41671e87-e333-49b3-bc52-fda3c410fdef
trentmkelly/LessWrong-43k
LessWrong
I'm Not An Effective Altruist Because I Prefer...
28a1cd65-3b33-4f6a-becb-fdf78e38c476
trentmkelly/LessWrong-43k
LessWrong
When Most VNM-Coherent Preference Orderings Have Convergent Instrumental Incentives This post explains a formal link between "what kinds of instrumental convergence exist?" and "what does VNM-coherence tell us about goal-directedness?". It turns out that VNM-coherent preference orderings have the same statistical incentives as utility functions; most such orderings will incentivize power-seeking in the settings covered by the power-seeking theorems.  In certain contexts, coherence theorems can have non-trivial implications, in that they provide Bayesian evidence about what the coherent agent will probably do. In the situations where the power-seeking theorems apply, coherent preferences do suggest some degree of goal-directedness. Somewhat more precisely, VNM-coherence is Bayesian evidence that the agent prefers to stay alive, keep its options open, etc.  However, VNM-coherence over action-observation histories tells you nothing about what behavior to expect from the coherent agent, because there is no instrumental convergence for generic utility functions over action-observation histories! Intuition The result follows because the VNM utility theorem lets you consider VNM-coherent preference orderings to be isomorphic to their induced utility functions (with equivalence up to positive affine transformation), and so these preference orderings will have the same generic incentives as the utility functions themselves. Formalism Let o1,...,on be outcomes, in a sense which depends on the context; outcomes could be world-states, universe-histories, or one of several fruits. Outcome lotteries are probability distributions over outcomes, and can be represented as elements of the n-dimensional probability simplex (i.e. as element-wise non-negative vectors whose entries sum to one).  A preference ordering ≺ is a binary relation on lotteries; it need not be e.g. complete (defined for all pairs of lotteries). VNM-coherent preference orderings are those which obey the VNM axioms. By the VNM utility theorem, coherent preference orderings induce consistent utility functions over
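To make the formalism concrete, a toy sketch (the three outcomes and the utility numbers are made up for illustration): lotteries are simplex vectors, a utility function is a vector of outcome utilities, and the induced preference ordering compares expected utilities.

```python
import numpy as np

# Toy illustration of the formalism above (made-up outcomes and utilities).
# Outcomes: three fruits. Lotteries are points on the probability simplex.
utility = np.array([1.0, 0.3, 0.0])        # u(apple), u(banana), u(cherry)

lottery_a = np.array([0.5, 0.5, 0.0])      # 50% apple, 50% banana
lottery_b = np.array([0.6, 0.0, 0.4])      # 60% apple, 40% cherry

def expected_utility(lottery):
    return float(lottery @ utility)

# The induced VNM-coherent preference ordering: A ≺ B iff EU(A) < EU(B).
print(expected_utility(lottery_a), expected_utility(lottery_b))
print("prefer B over A" if expected_utility(lottery_b) > expected_utility(lottery_a)
      else "prefer A over B")
```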
d3d0537a-a0a9-44d2-a32e-ed74271b6b43
StampyAI/alignment-research-dataset/lesswrong
LessWrong
CAIS-inspired approach towards safer and more interpretable AGIs Epistemic status: a rough sketch of an idea Current LLMs are huge and opaque. Our interpretability techniques are not adequate. Current LLMs are not likely to run hidden dangerous optimization processes. But larger ones may. Let's cap model size at that of the currently biggest models and ban everything above. Let's not build superhuman-level LLMs. **Let's build human-level specialist LLMs and allow them to communicate with each other via natural language.** Natural language is more interpretable than the inner processes of large transformers. Together, the specialized LLMs will form a meta-organism which may become superhuman, but it will be **more interpretable and corrigible**, as we'll be able to intervene on the messages between them. Of course, model parameter efficiency may increase in the future (as happened with Chinchilla), so we should monitor this and potentially lower the cap. On the other hand, our mechanistic interpretability techniques may improve, so we may increase the cap if we are confident it won't do harm. This idea seems almost trivial to me, but I haven't seen it discussed anywhere, so I'm posting it early to gather feedback on why this might not work.
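A toy sketch of how the proposal could be wired up (the specialist functions and the `monitor` hook are hypothetical stubs, not an implementation): each specialist would be a capped-size model, and every message passes through a human-readable transcript where it can be inspected or blocked.

```python
from typing import Callable

# Toy sketch of the proposal: capped-size specialist models exchange
# natural-language messages through a human-readable transcript.
# The "specialists" below are stand-in functions, not real LLMs.

def chemist(msg: str) -> str:
    return f"[chemist] On '{msg}': here is my domain analysis..."

def planner(msg: str) -> str:
    return f"[planner] Given that analysis, I propose the following steps..."

def run_exchange(task: str,
                 specialists: list[Callable[[str], str]],
                 monitor: Callable[[str], bool]) -> list[str]:
    """Route a task through the specialists, logging every message."""
    transcript = [f"[user] {task}"]
    message = task
    for specialist in specialists:
        message = specialist(message)
        if not monitor(message):          # the interpretability/corrigibility hook:
            transcript.append("[monitor] message blocked")  # we can intervene here
            break
        transcript.append(message)
    return transcript

if __name__ == "__main__":
    allow_all = lambda m: True
    for line in run_exchange("summarise the failure modes of process X",
                             [chemist, planner], allow_all):
        print(line)
```

The point of the sketch is only that the inter-model channel is plain text in one place, so the "intervene on the messages" step is a simple filter rather than an interpretability problem.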
38175680-41aa-4d42-896e-8187addbf4e9
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Perform Tractable Research While Avoiding Capabilities Externalities [Pragmatic AI Safety #4] *This is the fourth post in* [*a sequence of posts*](https://www.alignmentforum.org/posts/bffA9WC9nEJhtagQi/introduction-to-pragmatic-ai-safety-pragmatic-ai-safety-1) *that describe our models for Pragmatic AI Safety.* We argued in our [last post](https://www.alignmentforum.org/posts/n767Q8HqbrteaPA25/complex-systems-for-ai-safety-pragmatic-ai-safety-3) that the overall AI safety community ought to pursue multiple well-reasoned research directions at once. In this post, we will describe two essential properties of the kinds of research that we believe are most important. First, we want research to be able to tractably produce tail impact. We will discuss how tail impact is created in general, as well as the fact that certain kinds of asymptotic reasoning exclude valuable lines of research and bias towards many forms of less tractable research. Second, we want research to avoid creating capabilities externalities: the danger some safety approaches create by speeding up AGI timelines. It may at first appear that capabilities are the price we must pay for more tractable research, but we argue here and in the next post that such externalities are easily avoidable in over a dozen lines of research.

Strategies for Tail Impact
--------------------------

It’s not immediately obvious how to have an impact. In the second post in this sequence, we argued that research ability and impact are tail-distributed, so most of the value will come from the small amount of research in the tails. In addition, trends such as scaling laws may make it appear that there isn’t a way to “make a dent” in AI’s development. It is natural to fear that the research collective will wash out individual impact. In this section, we will discuss high-level strategies for producing large or decisive changes and describe how they can be applied to AI safety.

### Processes that generate long tails and step changes

Any researcher attempting to make serious progress will try to maximize their probability of being in the tail of research ability. It’s therefore useful to understand some general mechanisms that tend to lead to tail impacts. The mechanisms below are not the only ones: others include thresholds (e.g. tipping points and critical mass). We will describe three processes for generating tail impacts: multiplicative processes, preferential attachment, and the edge of chaos.

**Multiplicative processes**

Sometimes forces are additive, where additional resources, effort, or expenditure in any one variable can be expected to drive the overall system forward in a linear way. In cases like this, the Central Limit Theorem often holds, and we should expect that outcomes will be normally distributed; in these cases one variable tends not to dominate. However, sometimes variables are multiplicative or interact nonlinearly: if one variable is close to zero, increasing other factors will not make much of a difference. In multiplicative scenarios, outcomes will be dominated by the combinations of variables where each of the variables is relatively high. For example, adding three normally distributed variables together will produce another normal distribution with a higher variance; multiplying them together will produce a long-tailed distribution.
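A quick simulation of that contrast (a sketch with arbitrary parameters; the factors are kept positive so the product is easy to interpret):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200_000, 10  # samples, factors per sample (arbitrary choices)

# Positive factors centred on 1 so the product stays interpretable.
factors = rng.normal(loc=1.0, scale=0.25, size=(n, k)).clip(min=0.01)

sums = factors.sum(axis=1)       # additive combination -> roughly normal
products = factors.prod(axis=1)  # multiplicative combination -> long-tailed

for name, x in [("sum", sums), ("product", products)]:
    p50, p99, p999 = np.percentile(x, [50, 99, 99.9])
    print(f"{name:8s} median={p50:6.2f}  p99/median={p99 / p50:5.2f}  "
          f"p99.9/median={p999 / p50:5.2f}")
```

The sum's upper percentiles stay close to its median, while the product's sit several times above it, which is the long-tail behaviour described above.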
As a concrete example, consider the impact of an individual researcher with respect to the variables that impact their work: time, drive, GPUs, collaborators, collaborator efficiency, taste/instincts/tendencies, cognitive ability, and creativity/the number of plausible concrete ideas to explore. In many cases, these variables can interact nonlinearly. For example, it doesn’t matter if a researcher has fantastic research taste and cognitive ability if they have no time to pursue their ideas. This kind of process will produce long tails, since it is hard for people to get all of the many different factors right ([this is also the case in startups](https://80000hours.org/podcast/episodes/sam-bankman-fried-high-risk-approach-to-crypto-and-doing-good/)). The implication of thinking about multiplicative factors is that we shouldn’t select people or ideas based on a single factor, and should consider a range of factors that may multiply to create impact. For instance, selecting researchers purely based on their intelligence, mathematical ability, programming skills, ability to argue, and so on is unlikely to be a winning strategy. Factors such as taste, drive, and creativity must be selected for, but they take a long time to estimate and are often revealed through their long-term research track record. Some of these factors are less learnable than others, so it may not be possible to become good at all of these factors through sheer intellect or effort given limited time. Multiplicative factors are also relevant in the selection of *groups* of people. For instance, in machine learning, selecting a team of IMO gold medalists may not be as valuable as a team that includes people with other backgrounds and skill sets. People from some backgrounds have skill sets that may cover gaps in the skill sets of those from other backgrounds.

**Preferential Attachment**

In our second post, we addressed the [Matthew Effect](https://www.alignmentforum.org/posts/AtfQFj8umeyBBkkxa/a-bird-s-eye-view-of-the-ml-field-pragmatic-ai-safety-2#The_Matthew_Effect): *to those who have, more will be given.* This is related to a more general phenomenon called preferential attachment. There are many examples of this phenomenon: the rich get richer, industries experience agglomeration economies, and network effects make it hard to opt out of certain internet services. See a short video demonstrating this process [here](https://en.wikipedia.org/wiki/File:Chinese_Restaurant_Process_for_DP(0.5,H).webm). The implication of preferential attachment and the Matthew Effect is that researchers need to be acutely aware that it helps a lot to do very well early in their careers if they want to succeed later. Long-tail outcomes can be heavily influenced by timing.

**Edge of Chaos**

The “edge of chaos” is a heuristic for problem selection that can help to locate projects that might lead to long tails. The edge of chaos is used to refer to the space between a more ordered area and a chaotic area. Operating at the edge of chaos means wrangling a chaotic area and transforming a piece of it into something ordered, and this can produce very high returns. There are many examples of the edge of chaos as a general phenomenon. In human learning, the [zone of proximal development](https://en.wikipedia.org/wiki/Zone_of_proximal_development) represents a level of difficulty (e.g. in school assignments) that is not so hard as to be incomprehensible, but not so easy as to require little thought.
When building cellular automata, you need to take care to ensure the simulation is not so chaotic as to be incomprehensible but not so ordered as to be completely static. There’s a narrow sweet spot where emergent, qualitatively distinct outcomes are possible. This is the area where it is possible for individuals to be a creative, highly impactful force. In the context of safety research, staying on the edge of chaos means avoiding total chaos and total order. In areas with total chaos, there may be no tractability, and solutions are almost impossible to come by. This includes much of the work on “futuristic” risks: exactly which systems the risks will arise from is unclear, leading to a constant feeling of being unable to grasp the main problems. In the previous post, we argued that futuristic thinking is useful to begin to define problems, but for progress to be made, some degree of order must be made out of this chaos. However, in areas with total order, there is unlikely to be much movement since the low-hanging fruit has already been plucked. Designing metrics is a good example of something that is on the edge of chaos. Before a metric is devised, it is difficult to make progress in an area or even know if progress has been made. After the development of a metric, the area becomes much more ordered and progress can be more easily made. This kind of conversion allows for a great deal of steering of resources towards an area (whatever area the new metric emphasizes) and allows for tail impact. Another way to more easily access the edge of chaos is to keep a list of projects and ideas that don’t work now, but might work later, for instance, after a change in the research field or an increase in capabilities. Periodically checking this list to see if any of the conditions are now met can be useful, since these areas are most likely to be near the edge of chaos. In venture capital, a general heuristic is to “[figure out what can emerge now that couldn’t before](https://twitter.com/sama/status/1214274050651934721).” One useful edge of chaos heuristic is to only do one or two non-standard things in any given project. If a project deviates too much from existing norms, it may not be understood; but if it is too similar, it will not be original. At the same time, heavily imitating previous successes or what made a person previously successful leads to repetition, and risks not generating new value. The following questions are also useful for determining if an area is on the edge of chaos:

* Have there been substantial developments in the area in the past year?
* Has thinking or characterization of the problem changed at all recently?
* Is it not obvious which method changes will succeed and which will fail?
* Is there a new paradigm or coherent area that has not been explored much yet (contrast with pre-paradigmatic areas that have been highly confused for a long time, which are more likely to be highly chaotic than at the edge of chaos)?
* Has anyone gotten close to making something work, but not quite succeeded?

We will now discuss specific high-leverage points for influencing AI safety. We note that they can be analogized to many of the processes discussed above.
### Managing Moments of Peril

*My intuition is that if we minimize the number of precarious situations, we can get by with virtually any set of technologies.*  —[Tyler Cowen](https://soundcloud.com/sam-altman-543613753/tyleropenai)

It is not necessary to believe this statement to believe the underlying implication: moments of peril are likely to precipitate the most existentially-risky situations. In common risk analysis frameworks, catastrophes arise not primarily from failures of components, but from the system overall moving into unsafe conditions. When tensions are running high or progress is moving extremely quickly, actors may be more willing to take greater risks. In cases like this, people will also be more likely to apply AI towards explicitly dangerous aims such as building weapons. In addition, in an adversarial environment, incentives to build power-seeking AI agents may be even higher than usual. As [Ord writes](https://www.google.com/books/edition/The_Precipice/3aSiDwAAQBAJ?hl=en&gbpv=1&bsq=recall%20that%20nuclear%20weapons): *Recall that nuclear weapons were developed during the Second World War, and their destructive power was amplified significantly during the Cold War, with the invention of the hydrogen bomb. History suggests that wars on such a scale prompt humanity to delve into the darkest corners of technology.* Better forecasting could help with either prevention or anticipation of moments of peril. Predictability of a situation is also likely to reduce the risk factor of humans making poor decisions in the heat of the moment. Other approaches to reducing the risk of international conflict are likely to help. Because of the risks of moments of peril, we should be ready for them. During periods of instability, systems are more likely to rapidly change, which could be extremely dangerous, but perhaps also useful if we can survive it. Suppose a crisis causes the world to “wake up” to the dangers of AI. As [Milton Friedman remarked](https://www.goodreads.com/quotes/110844-only-a-crisis---actual-or-perceived---produces-real): “Only a crisis – actual or perceived – produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around.” A salient example can be seen with the COVID-19 pandemic and mRNA vaccines. We should make sure that the safety ideas lying around are as simple and time-tested as possible when a crisis inevitably happens.

### Getting in early

Building in safety early is very useful. In a report for the Department of Defense, [Frola and Miller](https://apps.dtic.mil/sti/citations/ADA141492) observe that approximately 75% of the most critical decisions that determine a system’s safety occur [early in development](https://mitpress.mit.edu/books/engineering-safer-world). The Internet was initially designed as an academic tool with [neither safety nor security in mind](https://www.researchgate.net/publication/283863741_A_history_of_internet_security). Decades of security patches later, security measures are still incomplete and increasingly complex. A similar reason for starting safety work now is that relying on experts to test safety solutions is not enough—solutions must also be time-tested. The test of time is needed even in the most rigorous of disciplines. A century before the four color theorem was proved, Kempe’s peer-reviewed proof went unchallenged for years until, finally, [a flaw was uncovered](https://academic.oup.com/plms/article-abstract/s2-51/1/161/1484405).
Beginning the research process early allows for more prudent design and more rigorous testing. Since nothing can be done [both hastily and prudently](https://www.google.com/books/edition/The_Moral_Sayings_of_Publius_Syrus_a_Rom/_QQSAAAAIAAJ?hl=en), postponing machine learning safety research increases the likelihood of accidents. (This paragraph is based on a paragraph from Unsolved Problems in ML Safety.) As Ord [writes](https://www.google.com/books/edition/The_Precipice/3aSiDwAAQBAJ?hl=en&gbpv=1&dq=%22early+action+is+best+for+tasks+that+require+a+large+number+of+successive+stages%22&pg=PT181&printsec=frontcover), “early action is best for tasks that require a large number of successive stages.” Research problems, including ML problems, contain many successive stages. AI safety has required, and will continue to require, a large number of successive stages to be successful: detecting that there’s a problem, clarifying the problem, measuring the problem, creating initial solutions, testing and refining those solutions, adjusting the formulation of the problem, etc. This is why we cannot wait until AGI to start to address problems in real ML systems. Another reason for getting in early is that things compound: research will influence other research, which in turn influences other research, which can help self-reinforcing processes produce outsized effects. Historically, this kind of compounding has driven almost all progress in deep learning. Such self-reinforcing processes can also be seen as an instance of preferential attachment. Stable trends (e.g. scaling laws) lead people to question whether work on a problem will make any difference. For example, benchmark trends are *sometimes* stable (see the previous post for progress across time). However, it is precisely because of continuous research effort that new directions for continuing trends are discovered (cf. Moore's law). Additionally, starting/accelerating the trend for a safety metric earlier rather than later would produce clear counterfactual impact.

### Scaling laws

Many different capabilities have scaling laws, and the same is true for some safety metrics. One objective of AI safety research should be to improve the scaling laws of safety relative to those of capabilities. For new problems or new approaches, naive scaling is often not the best way to improve performance. In these early stages, researchers with ideas are crucial drivers, and ideas can help to change both the slope and intercept of scaling laws. To take an example from ML, consider the application of Transformers to vision. [iGPT](https://openai.com/blog/image-gpt/) was far too compute-intensive, and researchers spent over a year making it more computationally efficient. This didn’t stand the test of time. Shortly thereafter, Google Brain, which is more ideas-oriented, introduced the “[patchify](https://arxiv.org/abs/2010.11929)” idea, which made Transformers for vision computationally feasible and resulted in better performance. The efficiency for vision Transformers has been far better than for iGPT, allowing further scaling progress to be made since then. To take another example, that of AlphaGo, the main performance gains didn’t come from increasing compute. Ideas helped drive it forward (from [Wikipedia](https://en.wikipedia.org/wiki/AlphaGo)): ![](https://lh3.googleusercontent.com/mgmkAvR5XdOWLc8l_GskIbkE8G522UliEEv8W-EbgQw1wzWXrx8ilxCya-l_xzrSFnJU9DzOqIU8RV7ETijPzFAXkPaeo-j4Egh0rPZZ00aO6LRb6I8SPi6gH5QRBgUeuBu1WzoCN_pMi3svpA) One can improve scaling laws by improving their slope or intercept.
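As a toy illustration of the slope/intercept distinction (all numbers here are made up): in log-log space a power-law scaling curve is a straight line, so an "intercept" idea shifts the whole line down while a "slope" idea makes it fall faster with scale.

```python
import numpy as np

# Hypothetical power-law scaling: error = a * compute**(-b).
# In log-log space this is a line: log(error) = log(a) - b*log(compute),
# so "intercept" improvements change a and "slope" improvements change b.
compute = np.logspace(0, 6, 7)  # arbitrary compute budgets

def scaling_curve(a, b):
    return a * compute ** (-b)

baseline = scaling_curve(a=1.0, b=0.10)
better_intercept = scaling_curve(a=0.5, b=0.10)   # e.g. an architectural change
better_slope = scaling_curve(a=1.0, b=0.15)       # e.g. a better form of supervision

for c, e0, e1, e2 in zip(compute, baseline, better_intercept, better_slope):
    print(f"compute={c:10.0f}  baseline={e0:.3f}  "
          f"better intercept={e1:.3f}  better slope={e2:.3f}")
```

In this made-up example the slope change starts out worse but overtakes the intercept change at large scale, which is why the two kinds of idea have different long-run value.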
It’s not easy to change the slope or intercept, but investing in multiple people who could potentially produce such breakthroughs has been useful. In addition, for safety metrics, we need to move as far along the scaling law as possible, which requires researchers and sustained effort. It is usually necessary to apply exponential effort to continue to make progress in scaling laws, which requires continually increasing resources. As ever, social factors and willingness of executives to spend on safety will be critical in the long term. This is why we must prioritize the social aspects of safety, not just the technical aspects. Scaling laws can be influenced by ideas. Ideas can change the slope (e.g., the type of supervision) and intercept (e.g., numerous architectural changes). Ideas can change the data resources: the speed of creating examples (e.g., [saliency maps for creating adversarial examples](https://aclanthology.org/Q19-1029/)), cleverly repurposing data from the Internet (e.g., using an existing subreddit to collect task-specific data), recognizing sources of superhuman supervision (such as those from a collective intelligence, for example a paper recommender based on multiple people’s choices). Ideas can change the compute resources, for example through software-level and hardware-level optimizations. Ideas can define new tasks and identify which scaling laws are valuable to improve.

### Don’t let the perfect be the enemy of the good

Advanced AI systems will not be ideal in all respects. Nothing is perfect. Likewise, high-risk technologies will be forced into conditions that are not their ideal operating conditions. Perfection in the real world is unattainable, and attempts to achieve perfection may not only fail but also achieve less than attempts carefully aimed at reducing errors as much as possible. For example, not all nuclear power plants melt down; this does not mean there are no errors in those plants. [*Normal Accidents*](http://sunnyday.mit.edu/papers/hro.pdf) looked at organizational causes of errors and noted that some “accidents are inevitable and are, in fact, normal.” Rather than completely eliminate all errors, the goal should be to minimize the impact of errors or prevent errors from escalating and carrying existential consequences. To do this, we will need fast feedback loops, prototyping, and experimentation. Due to emergence and unknown unknowns, risk in complex systems cannot be completely eliminated or managed in one fell swoop, but it can be progressively reduced. All else being equal, going from 99.9% safe to 99.99% safe is highly valuable. Across time, we can continually drive up these reliability rates, which will continually increase our expected civilizational lifespan. Sometimes it’s argued that any errors at all with a method will necessarily mean that x-risk has not really been reduced, because an optimizer will necessarily exploit the errors. While this is a valid concern, it should not be automatically assumed. The next section will explain why.

Problems with asymptotic reasoning
----------------------------------

In some parts of the AI safety community, there is an implicit or explicit drive for asymptotic reasoning or thinking in the limit.
* “Why should we worry about improving [safety capability] now since performance of future systems will be high?”
* “If we let [variable] be infinite, then wouldn’t [safety problem] be completely solved?”
* “Won’t [proposed safety measure] completely fail since we can assume the adversary is infinitely powerful?”

While this approach arises from some good intuitions and has useful properties, it should not always be taken to the extreme.

### Goodhart’s Law

*Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.* —[Goodhart’s Law](https://www.google.com/books/edition/Inflation_Depression_and_Economic_Policy/OMe6UQxu1KcC?hl=en&gbpv=1&bsq=any%20observed%20statistical) (original phrasing, not the simplistic phrasing)

Goodhart’s Law is an important phenomenon that is crucial to understand when conducting AI safety research. It is relevant to proxy gaming, benchmark design, and adversarial environments in general. However, it is sometimes misinterpreted, so we seek to explain our view of the importance of Goodhart’s Law and what it does and does not imply about AI safety. Goodhart’s Law is sometimes used to argue that optimizing a single measure is doomed to create a catastrophe as the measure being optimized ceases to be a good measure. This is a far stronger formulation than originally stated. While we must absolutely be aware of the tendency of metrics to collapse, we should also avoid falling into the trap of thinking that *all objectives can never change and will always collapse in all circumstances*. Strong enough formulations are tantamount to claiming that there is no goal or way to direct a strong AI safely (implying our inevitable doom). Goodhart’s Law does not prove this: instead, it shows that adaptive counteracting systems will be needed to prevent the collapse of what is being optimized. It also shows that metrics will not always include everything that we care about, which suggests we should try to include a variety of different possible goods in an AGI’s objective. Whether we like it or not, all objectives are wrong, but some are useful.

**Counteracting forces**

There are many examples of organizations optimizing metrics while simultaneously being reeled in by larger systems or other actors from the worst excesses. For instance, while large businesses sometimes employ unsavory practices in pursuit of profits, in many societies they do not hire hitmen to assassinate the leaders of competing companies. This is because another system (the government) understands that the maximization of profits can create negative incentives, and it actively intervenes to prevent the worst-case outcomes with laws. To give another example, the design of the United States constitution was explicitly based on the idea that all actors would be personally ambitious. Checks and balances were devised to attempt to subdue the power of any one individual and promote the general welfare (as James Madison [wrote](https://billofrightsinstitute.org/primary-sources/federalist-no-51), “ambition must be made to counteract ambition”). While this system does not always work, it has successfully avoided vesting all power in the single most capable individual. Intelligence clearly makes a difference in the ability to enact counteracting forces to Goodhart’s Law. An extremely intelligent system will be able to subvert far more defenses than a less intelligent one, and we should not expect to be able to restrain a system far more intelligent than all others.
This suggests instead that it is extremely important to avoid a situation where there is only a single agent with orders of magnitude more intelligence or power than all others: in other words, there should not be a large asymmetry in our offensive and defensive capabilities. It also suggests that the design of counteracting incentives of multiple systems will be critical. In order to claim that countervailing systems are not appropriate for combating Goodhart’s Law, one may need to claim that offensive capabilities must always be greater than defensive capabilities, or alternatively, that the offensive and defensive systems will necessarily collude. In general, we do not believe there is a decisive reason to expect offensive capabilities to be leagues better than defensive capabilities: the examples from human systems above show that offensive capabilities do not always completely overwhelm defensive capabilities (even when the systems are intelligent and powerful), in part due to increasingly better monitoring. We can’t take the offensive ability to the limit without taking the defensive ability to the limit. Collusion is a more serious concern, and must be dealt with when developing counteracting forces. In designing incentives and mechanisms for various countervailing AI systems, we must decrease the probability of collusion as much as possible, for instance, through AI honesty efforts. Asymptotic reasoning recognizes that performance of future systems will be high, which is sometimes used to argue that work on counteracting systems is unnecessary in the long term. To see how this reasoning is overly simplistic, assume we have an offensive AI system, with its capabilities quantified with $o$, and a protective defensive AI system, with its capabilities quantified with $p$.
Suppose there is an offensive AI system, with its capabilities quantified o, and a protective defensive AI system, with its capabilities quantified p. It may be true that o and p are high, but we also need to care about factors such as p − o and the difference in derivatives dp/d(resources) − do/d(resources). Some say that future systems will be highly capable, so we do not need to worry about improving their performance in any defensive dimension. Since the relative performance of systems matters and since the scaling laws for safety methods matter, asserting that all variables will be high enough not to worry about them is a low-resolution account of the long term. Some examples of counteracting systems include artificial consciences, AI watchdogs, lie detectors, filters for power-seeking actions, and separate reward models.

**Rules vs Standards**

*So, we’ve been trying to write tax law for 6,000 years. And yet, humans come up with loopholes and ways around the tax laws so that, for example, our multinational corporations are paying very little tax to most of the countries that they operate in. They find loopholes. And this is what, in the book, I call the loophole principle. It doesn’t matter how hard you try to put fences and rules around the behavior of the system. If it’s more intelligent than you are, it finds a way to do what it wants.*

—[Stuart Russell](https://www.vox.com/future-perfect/2019/10/26/20932289/ai-stuart-russell-human-compatible)

This is true because tax law is exclusively built on *rules*, which are clear, objective, and knowable beforehand. It is built on rules because the government needs to process hundreds of millions of tax returns per year, many tax returns are fairly simple, and people want to have predictability in their taxes. Because rule systems cannot possibly anticipate all loopholes, they are bound to be exploited by intelligent systems. Rules are fragile.
The law has another class of requirements, called [standards](https://www.youtube.com/watch?v=vgpZ4Y4tEPk), which are designed to address these issues and others. Standards frequently include terms like “reasonable,” “intent,” and “good faith,” which we do not know how to assess in a mechanistic manner. We simply “know it when we see it:” in fact, a common legal term, *res ipsa loquitur*, means “the thing speaks for itself.” Unlike rule-based code, deep neural networks can model these types of fuzzier concepts. Unlike the tax code, which is based on rules and can be adjudicated by logic-based computer programs such as TurboTax, the criminal law is adjudicated by an intelligent system with intuitions (a judge and perhaps a jury). If a criminal is acquitted when they are guilty, it is because the intelligent system failed to collect enough evidence or interpret it correctly, not because the defense found a “loophole” in the definition of homicide (the exception is when lawyers make mistakes which create trouble under the *rules* used for procedure and evidence). Russell’s argument correctly concludes that rules alone cannot restrain an intelligent system. However, standards (e.g. “use common sense”, “be reasonable”) can restrain some intelligent behavior, provided the optimizing system is not too much more intelligent than the judiciary. This argument points to the need to have intelligent systems, rather than mechanistic rules, that are able to evaluate other intelligent systems. There are also defensive mechanisms that work for fuzzy raw data, [such as *provable* adversarial robustness](https://arxiv.org/abs/1902.02918), that can help strengthen the defense. It is correct to conclude that an AGI’s objectives should not be based around precise rules, but it does not follow that all objectives are similarly fragile. **Goal refinement** Goodhart’s Law applies to *proxies* for what we care about, rather than what we actually care about. Consider [ideal utilitarianism](https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199577446.001.0001/acprof-9780199577446-chapter-4): does Goodhart’s Law show that “maximizing the good” will inevitably lead to ruin? Regardless of how one views ideal utilitarianism, it would be wrong to conclude that it is refuted by Goodhart’s Law, which warns that many *proxies* for good (e.g. “the number of humans who are smiling”) will tend to collapse when subjected to optimization pressure. Proxies that capture something we care about will likely have an approximation error. Some objectives have more approximation error than others: for instance, if we want to measure economic health, using real GDP reported by the US government will likely have less approximation error than nominal GDP reported in a text file on my computer. When subjected to optimization, that approximation error may become magnified, as optimizers can find areas where the approximation is particularly flawed and potentially manipulate it. This suggests that as optimization power increases, approximation error must correspondingly decrease, which can happen with better models, or approximation errors must become harder to exploit, which can happen with better detectors. As such, systems will need to have their goals continuously refined and improved. Methods for goal refinement might include better automated moral decision making and value clarification. We will discuss these in our next post. 
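As a toy illustration of this magnification effect (a sketch with synthetic numbers, not an empirical result), consider selecting the best of n candidates according to a noisy proxy of their true quality. The stronger the selection, the more of the winner's apparent quality is just approximation error that the search sought out:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_best_by_proxy(n_candidates, proxy_noise=1.0):
    """Pick the candidate that looks best under a noisy proxy of true quality."""
    true_quality = rng.normal(size=n_candidates)
    proxy = true_quality + rng.normal(scale=proxy_noise, size=n_candidates)
    winner = np.argmax(proxy)
    return proxy[winner], true_quality[winner]

# More candidates means more optimization pressure applied to the proxy.
for n in [10, 1_000, 100_000]:
    results = np.array([select_best_by_proxy(n) for _ in range(200)])
    proxy_score, true_score = results.mean(axis=0)
    print(f"n={n:>6}  proxy of winner: {proxy_score:5.2f}  "
          f"true quality: {true_score:5.2f}  gap: {proxy_score - true_score:5.2f}")
```

The winner's proxy score keeps climbing as selection gets stronger, but the gap between the proxy and the true quality grows along with it; this is the sense in which approximation error must shrink, or become harder to exploit, as optimization power increases.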
### Limitations of research based on a hypothetical superintelligence Many research agendas start by assuming the existence of a superintelligence, and ask how to prove that it is completely safe. Rather than focus on microcosmic existing or soon-to-emerge systems, this line of research analyzes a model in the limit. This line of attack has limitations and should not be the only approach in the portfolio of safety research. For one, it encourages work in areas which are far less tractable. While mathematical guarantees of safety would be the ideal outcome, there is good reason to believe that in the context of engineering sciences like deep learning, they will be very hard to come by (see the previous posts in the sequence). In information security, practitioners do not look for airtight guarantees of security, but instead try to increase security iteratively as much as possible. Even RSA, the centerpiece of internet encryption, is not provably completely unbreakable (perhaps a superintelligence could find a way to efficiently factor large numbers). Implicitly, the requirement of a proof and only considering worst-case behavior relies on incorrect ideas about Goodhart’s Law: “if it is possible for something to be exploited, it certainly will be by a superintelligence.” As detailed above, this account is overly simplistic and assumes a fixed, rule-based, or unintelligent target. Second, the assumption of superintelligence eliminates an entire class of interventions which may be needed. It forces a lack of concretization, since it is not certain what kind of system will eventually be superintelligent. This means that feedback loops are extremely sparse, and it is difficult to tell whether any progress is being made. The approach often implicitly incentivizes retrofitting superintelligent systems with safety measures, rather than building safety into pre-superintelligent systems in earlier stages. From complex systems, we know that the crucial variables are often discovered by accident, and only empirical work is able to include the testing and tinkering needed to uncover those variables.  Third, this line of reasoning typically assumes that there will be a single superintelligent agent working directly against us humans. However, there may be multiple superintelligent agents that can rein in other rogue systems. In addition, there may be artificial agents that are above human level on only some dimensions (e.g., creating new chemical or biological weapons), but nonetheless, they could pose existential risks before a superintelligence is created. Finally, asymptotically-driven research often ignores the effect of technical research on sociotechnical systems. For example, it does very little to improve safety culture among the empirical researchers who will build strong AI, which is a significant opportunity cost. It also is less valuable in cases of (not necessarily existential) crisis, just when policymakers will be looking for workable and time-tested solutions. Assuming an omnipotent, omniscient superintelligence can be a useful exercise, but it should not be used as the basis for all research agendas. ### Instead, improve cost/benefit variables In science, problems are rarely solved in one fell swoop. Rather than asking, “does this solve every problem?” we should ask “does this make the current situation better?” Instead of trying to build a technical solution and then try to use it to cause a future AGI to swerve towards safety, we should begin steering towards safety now. 
The military and information assurance communities, which are used to dealing with highly adversarial environments, do not search for solutions that render all failures an impossibility. Instead, they often take a cost-benefit analysis approach by aiming to increase the cost of the most pressing types of adversarial behavior. Consequently, a cost-benefit approach is a time-tested way to address powerful intelligent adversaries. Even though no single factor completely guarantees safety, we can drive down risk through a combination of many safety features (defense in depth). Better adversarial robustness, ethical understanding, safety culture, anomaly detection, and so on collectively make exploitation by adversaries harder, driving up costs. In practice, the balance between the costs and benefits of adversarial behavior needs to be tilted in favor of the costs. While it would be nice to have the cost of adversarial behavior be infinite, in practice this is likely infeasible. Fortunately, we just need it to be sufficiently large. In addition to driving up the cost of adversarial behavior, we should of course drive down the cost of safety features (an important high-level contributing factor). This means making safety features useful in more settings, easier to implement, more reliable, and less computationally expensive, and giving them less steep or no tradeoffs with capabilities. Even if an improvement does not completely solve a safety problem once and for all, we should still aim to continue increasing the benefits. In this way, safety becomes something we can continuously improve, rather than an all-or-nothing binary property. Some note that we “only have one chance to get safety right,” so safety is binary. Of course, there are no do-overs if we’re extinct, so whether or not humans are extinct is indeed binary. However, we believe that the probability of extinction due to an event or deployment is not zero or one, but rather a continuous real value that we can reduce by cautiously changing the costs and benefits of hazardous behavior and safety measures, respectively. The goal should be to reduce risk as much as possible over time. It’s important to note that not all research areas, including those with clear benefits, will have benefits worth their costs. We will discuss one especially important cost to be mindful of: hastening capabilities and the onset of x-risk.

Safety/capabilities tradeoffs
-----------------------------

Safety and capabilities are linked and can be difficult to disentangle. A more capable system might be more able to understand what humans believe is harmful; it might also have more ability to cause harm. Intelligence cuts both ways. We do understand, however, that desirable behavior *can* be decoupled from intelligence. For example, it is well-known that *moral virtues* are distinct from *intellectual virtues*. An agent that is knowledgeable, inquisitive, quick-witted, and rigorous is not necessarily honest, just, power-averse, or kind. In this section, by *capabilities* we mean *general capabilities*. These include general prediction, classification, state estimation, efficiency, scalability, generation, data compression, executing clear instructions, helpfulness, informativeness, reasoning, planning, researching, optimization, (self-)supervised learning, sequential decision making, recursive self-improvement, open-ended goals, models accessing the Internet, or similar capabilities.
We are not speaking of more specialized capabilities for downstream applications (for instance, climate modeling). It is not wise to decrease some risks (e.g. improving a safety metric) by increasing other risks through advancing capabilities. In some cases, optimizing safety metrics might increase capabilities even if they aren’t being aimed for, so there needs to be a more principled way to analyze risk. We must ensure that growing the safety field does not simply hasten the arrival of superintelligence.   ![](https://lh4.googleusercontent.com/B5un8d_inmwasIb794AVpCigY-JKG58d7XAnzlbK1u-ZE3fbSCCHHFN66y9avJx-u-9uL7nBZ20VvtssNbrip57UJlYl6dM6n0mX8n61bv6zIfJ3yg3HsyF7JZPdQ5fuFhl80BH6rPF1UEDu_g) The figure above shows the performance of various methods on standard ImageNet as well as their anomaly detection performance. The overall trendline shows that anomaly detection performance tends to improve along with more general ImageNet performance, suggesting that one way to make “safety progress” is simply to move along the trendline (see the red dot). However, if we want to make [differentialprogress](https://www.nickbostrom.com/existential/risks.html) towards safety specifically, we should instead focus on safety methods that do not simply move along the existing trend (see the green dot).  In addition, the trendline also suggests that differential safety progress is in fact *necessary* to attain maximal anomaly detection performance, since even 100% accuracy would only lead to ~88% AUROC. Consequently researchers will need to shift the line up, not just move along the trendline. This isn’t the whole picture. There may be other relevant axes, such as the ease of a method’s implementation, its computational cost, its extensibility, and its data requirements. However, the leading question should be to ask what the effect of a safety intervention is on general capabilities. It’s worth noting that safety is commercially valuable: systems viewed as safe are more likely to be deployed. As a result, even improving safety without improving capabilities could hasten the onset of x-risks. However, this is a very small effect compared with the effect of directly working on capabilities. In addition, hypersensitivity to any onset of x-risk proves too much. One could claim that any discussion of x-risk at all draws more attention to AI, which could hasten AI investment and the onset of x-risks. While this may be true, it is not a good reason to give up on safety or keep it known to only a select few. We should be precautious but not self-defeating. ### Examples of capabilities goals with safety externalities [Self-supervised learning](https://arxiv.org/abs/1906.12340) and [pretraining](https://arxiv.org/abs/2004.06100) have been shown to improve various uncertainty and robustness metrics. However, the techniques were developed primarily for the purpose of advancing general capabilities. This shows that it is not necessary to be aiming for safety to improve it, and certain upstream capabilities improvements can simply improve safety “accidentally.” Improving world understanding helps models better anticipate consequences of their actions. It thus makes it less likely that they will produce unforeseen consequences or take irreversible actions. However, it also increases their power to influence the world, potentially increasing their ability to produce undesirable consequences. 
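As a concrete toy version of this trendline test (a sketch with made-up numbers, not values read off the figure), one can fit the capability-to-safety trendline from existing methods and then ask how far a new method sits above it:

```python
import numpy as np

# Hypothetical (accuracy, AUROC) pairs standing in for existing methods.
accuracy = np.array([70.0, 73.0, 76.0, 79.0, 82.0, 85.0])
auroc    = np.array([74.0, 76.5, 78.0, 80.5, 82.5, 84.5])

slope, intercept = np.polyfit(accuracy, auroc, deg=1)

def differential_safety_gain(acc, det_auroc):
    """AUROC beyond what the capability trendline predicts from accuracy alone."""
    return det_auroc - (slope * acc + intercept)

print(differential_safety_gain(87.0, 86.0))  # roughly on-trend: mostly a capabilities gain
print(differential_safety_gain(79.0, 86.0))  # well above trend: differential safety progress
```

A method with a residual near zero has mostly moved along the line (the red dot), while a clearly positive residual shifts the line upward (the green dot).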
Note that in some cases, even if research is done with a safety goal, it might be indistinguishable from research done with a capabilities goal if it simply moves along the existing trendlines. ### Examples of safety goals with capabilities externalities Encouraging models to be truthful, when defined as not asserting a lie, may be desired to ensure that models do not willfully mislead their users. However, this may increase capabilities, since it encourages models to have better understanding of the world. In fact, maximally truth-seeking models would be more than fact-checking bots; they would be general research bots, which would likely be used for capabilities research. Truthfulness roughly combines three different goals: accuracy (having correct beliefs about the world), calibration (reporting beliefs with appropriate confidence levels), and honesty (reporting beliefs as they are internally represented). Calibration and honesty are safety goals, while accuracy is clearly a capability goal. This example demonstrates that in some cases, less pure safety goals such as truth can be decomposed into goals that are more safety-relevant and those that are more capabilities-relevant. One safety goal could be to incentivize collaboration, rather than competition, between different AI systems. This might be useful in reducing high-stakes conflicts that could lead to catastrophic outcomes. However, depending on how it is researched, it may come with capabilities externalities. For instance, focusing on getting agents to perform better in positive-sum games might have a significant effect on general planning ability, which could have further downstream effects. Better modeling “human preferences” may also be an example of a safety goal with capabilities externalities; we will cover this below. ### Practical steps When attempting to measure progress towards safety, it’s essential to also measure a method’s contribution to capabilities. One should ask whether a method creates a differential improvement in safety. Rather than relying on intuition to ascertain this, it is necessary to make empirical measurements. Empirical research claiming to differentially improve safety should demonstrate a differential safety improvement empirically. Of course, *reducing*capabilities is not likely to be helpful in practice, as this could make the method less likely to be used in the real-world. Sometimes it is claimed that more general capabilities are needed to produce safety work, and so working on general capabilities advancements will at some point eventually allow working on safety. We agree that it is not necessarily the case that it could *never* be worth making capabilities advancements in exchange for differential improvements in safety. If at some point in the future it is impossible to make safety progress without an increase in capabilities, there may be more reason to accept capabilities externalities. However, working on general capabilities for years to start studying a particular safety problem is neither precautious nor necessary. There are fortunately many safety research areas where it’s possible to make contributions without contributing to general capabilities at all. For instance, almost every paper in adversarial robustness hasn’t improved accuracy, because the two are not positively correlated. 
Similarly, out-of-distribution detection usually doesn’t come with capability externalities, and often focuses on eliciting out-of-distribution detection information from fixed models rather than improving their representations. We will discuss these and other areas and describe their relation to general capabilities in the next post. An Application: Machine Ethics vs. Learning Task Preferences ------------------------------------------------------------ Preference learning is typically operationalized as learning human preferences over different ways to accomplish a task. This is intended to ensure that agents understand what humans mean, rather than simply what they say. However, modeling “human values” or “human preferences” is often just modeling “user comparisons” or “task preferences,” not unlike the preference or comparison annotations that companies have been collecting for ML-driven translation, advertisement, and search algorithms throughout the past years. First, humans prefer smarter models. This is especially true when humans rate the usefulness of models. As such, modeling task preferences often does not pass the capability externalities test because it includes information about preferences for task-specific behavior (e.g. the quality of a summary). Second, preferences can be inconsistent, ill-conceived, and highly situation-dependent, so they may not be generalizable to the unfamiliar world that will likely arise after the advent of highly-capable models. Consequently, we recommend trying to make models act in keeping with human values, not model preferences for a broad suite of general tasks. One area trying to do this is [machine ethics](https://plato.stanford.edu/entries/ethics-ai/#MachEthi), which is about building ethical AIs. (This is in contrast to AI ethics, which is about “ethics of AI” and is dominated by discussions of fairness, bias, and inequality; by way of its constituent’s Foucualtian presuppositions, it often implicitly [adopts anti-normative positions](https://perso.uclouvain.be/mylene.botbol/Recherche/GenreBioethique/Nussbaum_NRO.htm).) Rather than model task preferences, a core aim of machine ethics is modeling actual human values. Compared with task preferences, ethical theories and human values such as intrinsic goods may be more generalizable, interpretable, and neglected. They are also more important to us (compared to preferences for high-quality summarization, for instance), and are also plausibly timeless. In addition, many normative factors are common to a number of ethical theories, even if theories disagree about how to combine them. Coarsely, normative factors are intrinsic goods, general constraints, special obligations, and options. An expansion of this list could be wellbeing, knowledge, the exercise of reason, autonomy, friendship, equality, culpability, impartiality, desert, deontological thresholds, intending harm, lying, promises, special obligations, conventions, duties to oneself, options, and so on. Note that these include factors that cover fairness, but also a whole spectrum of additional important factors. In general, research into the application of ethical theories and the approximation of normative factors appears far less likely to lead to capabilities externalities, because the scope of what is being learned is restricted dramatically. Ethical theories contain less information that is relevant to understanding how to perform general tasks than generic human annotations and comparisons. 
Still, it’s important to anticipate potential capabilities externalities: for example, one should not try to model consequentialist ethics by building better general predictive world models, as this is likely to create capabilities externalities. One possible goal of machine ethics is work towards a [moral parliament](https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf), a framework for making ethical decisions under moral and empirical uncertainty. Agents could submit their decisions to the internal moral parliament, which would incorporate the ethical beliefs of multiple stakeholders in informing decisions about which actions should be taken. Using a moral parliament could reduce the probability that we are leaving out important normative factors by focusing on only one moral theory, and the inherent multifaceted, redundant, ensembling nature of a moral parliament would also contribute to making the model less gameable. If a component of the moral parliament is uncertain about a judgment, it could request help from human stakeholders. The moral parliament might also be able to act more quickly to restrain rogue agents than a human could and act in the fast-moving world that is likely to be induced by more capable AI. We don’t believe the moral parliament would solve all problems, and more philosophical and technical work will be needed to make it work, but it is a useful goal for the next few years. Sometimes it is assumed that a sufficiently intelligent system will simply understand ethics, so there is no need to work on machine ethics. This analysis succumbs to the problems with asymptotic reasoning and assuming omniscience detailed above. In particular, we should not assume that an ethics model can automatically withstand the optimization pressure of another superintelligence, or that it will generalize in the same way as humans under distributional shift. We need to ensure that we will have aligned, reliable, and robust ethical understanding. A proactive ethics strategy is far more likely to succeed than one that naively hopes that the problem can be ignored or taken care of at the last moment. Additionally, on the sociotechnical front, people need time-tested examples if they are to be adopted or required in regulation. A moral parliament will take years to engineer and accrue buy-in, so we cannot trust that our values will be best furthered by a last-minute few-shot moral parliament. Conclusion ---------- Starting research with asymptotic reasoning, while it has the benefit of aiming for research that has immediately graspable AI x-risk relevance, carries the cost of making research less specific and less tractable. It also reduces the number of research feedback loops. By focusing on microcosms, empirical research is relevant for reducing AI x-risk, but its relevance is less immediately graspable. However, the reduction in immediately graspable relevance is more than made up for by increased tractability, specificity, measurability, and the information gained from faster feedback loops. Despite these strengths, naive empirical research threatens to produce capabilities externalities, which should be avoided as much as possible. We propose a strategy to produce tractable tail impacts with minimal capabilities externalities. In summary: * Pursue tail impacts, reduce moments of peril, start working on safety early, and improve the scaling laws of safety in comparison to capabilities. 
* Since impact is likely to be tail distributed, it’s important to understand where tail outcomes emerge from: multiplicative processes, preferential attachment, and the edge of chaos.
* “How can this safety mechanism make strong AI completely safe?” excludes many useful risk reduction strategies. Works that instead take up the question “how can this work steer the AI development process in a safer direction?” are also useful for AI x-risk reduction.
* It’s useful to view safety as a continuously improvable property rather than an all-or-nothing binary property.
* We take a stand against capabilities externalities in some safety research directions. AI safety research should be safe.
* Machine ethics should be preferred to learning task preferences, because the latter can have significant capability externalities, and ethics contains more time-tested and reliable values than task-specific preferences do.
* We suggest trying to achieve safety through evolution, rather than only trying to arrive at safety through intelligent design.
5e3690eb-dcf5-40ce-9511-d7dfb680e897
StampyAI/alignment-research-dataset/arxiv
Arxiv
Deep Learning Application in Security and Privacy -- Theory and Practice: A Position Paper 1 Introduction --------------- Computing technology is becoming an integral part of our lives and has many facets ranging from supercomputing (used in weather prediction, cutting-edge research and business automation) to embedded devices (like smartphones, electronic devices in a home and intelligent transport systems). Among many, security and privacy are considered to be two distinct and unique challenges. In the security and privacy domain, any protection system has to match a constantly evolving adversarial actor. According to the Symantec cybercrime report [[1](#bib.bib1)], the overall number of vulnerabilities has increased by 13% in 2018. Similarly, according to Cybersecurity Ventures [[2](#bib.bib2)], zero-day exploits seen in the wild will grow from one per week (in 2015) to one per day by 2021. It is practically impossible for a human to keep pace with the sheer number of cybersecurity events (and related activities) on a daily basis on top of an already daunting threat landscape [[3](#bib.bib3)]. In this paper, and as a matter of fact in any context, security and privacy are relative terms. It is not discussed as an absolute state, but rather as a state with potential and/ or accepted risks. The global cost of data breaches has increased by 6.4% [[4](#bib.bib4)] and has the potential to severely damage an organisation’s bottom-line, and that is without taking the potential penalties imposed by the General Data Protection Regulation (GDPR) into account [[5](#bib.bib5)]. As per the GDPR, an organisation can be fined up to €10 million or two percent of the firm’s global turnover for a small offence (whichever is greater). For a serious offence, an organisation can be fined up to €20 million or four percent of a firm’s global turnover (whichever is greater) [[5](#bib.bib5)]. Furthermore, there is a crisis of skilled cybersecurity practitioners. According to study [[6](#bib.bib6)], the cybersecurity job market will grow by approximately 6 million USD globally by 2019 – with potential shortages of trained professionals up to 25%. Automation of decisions and actions based on network and system generated alerts has the potential to help overcome the challenges related to security and privacy – both in a technological and a business-operations (e.g. labour shortages) dimension. Artificial Intelligence (AI) is seen as a potential solution towards the cybersecurity automation challenge in some academic and industrial circles. Machine Learning (ML) has been successfully deployed in a number of domains including but not limited to: image classification [[7](#bib.bib7)], objective detection and recognition [[8](#bib.bib8)], language translation, and voice synthesis [[9](#bib.bib9)]. Deep Learning (DL), a type of Machine Learning (ML) method, in most cases does not require prior expert knowledge for its learning (an obvious exception is Neuro-Fuzzy techniques). Therefore, it needs less manually engineered feature extraction and specialist knowledge [[10](#bib.bib10)]. DL can detect patterns in the raw data with potentially higher and more abstract level representations - a function that is very interesting for cybersecurity zero-day vulnerability/ exploit detection. Similarly, DL is used to abstract malware’s behavioural features and anomalous activity and can then be used to detect its existence in a system [[11](#bib.bib11), [12](#bib.bib12)]. 
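As a minimal illustration of this behaviour-based detection idea, consider the following sketch, in which an autoencoder is trained on synthetic 'benign' behavioural features and samples that it reconstructs poorly are flagged as anomalous. The features, model and threshold are illustrative assumptions only and are not taken from the cited works.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Synthetic stand-in for behavioural features (e.g. per-process syscall and
# network-usage counts); a real system would extract these from telemetry.
latent = torch.randn(2000, 4)
mixing = torch.randn(4, 16)
benign = latent @ mixing + 0.1 * torch.randn(2000, 16)  # normal behaviour
suspect = 3.0 * torch.randn(10, 16)                     # out-of-pattern behaviour

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 16))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)

# Train the autoencoder to reconstruct benign behaviour only.
for _ in range(300):
    optimiser.zero_grad()
    loss = ((model(benign) - benign) ** 2).mean()
    loss.backward()
    optimiser.step()

# Flag anything the model reconstructs poorly as anomalous.
with torch.no_grad():
    err_benign = ((model(benign) - benign) ** 2).mean(dim=1)
    err_suspect = ((model(suspect) - suspect) ** 2).mean(dim=1)
threshold = err_benign.quantile(0.99)
print("flagged", int((err_suspect > threshold).sum()), "of", len(suspect), "suspect samples")
```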
AI as a cybersecurity tool is expected to capture a large market and it is clear that AI has the potential to impact the cybersecurity space [[13](#bib.bib13)]. Furthermore, there is sufficient market interest in both commercial (financial incentives) and academic research. It is understood that there is a potential to mislead an ML/ DL deployment as discussed in existing literature [[14](#bib.bib14), [15](#bib.bib15)], which is not the focus of this paper. In this paper, we discuss the challenges of deploying AI-based techniques (ML/ DL) to security domains. The discussion highlights the difference between the theory and practice of applying DL methods as a general security tool. The discussed challenges come from the technical development and exploration of DL methods in the context of cybersecurity – showcasing the fact that DL techniques are not in themselves a panacea but merely a tool that requires a number of correct (and in some cases trustworthy) features to be effective. The robustness of DL is stated in [[14](#bib.bib14)] as inversely proportional to the potential of an attacker’s ability to find adversarial examples, which can impact the accurate classification and detection of a threat. However, in this paper, we argue that robustness, no doubt an important feature, is not just dependent on the attacker’s ability to find adversarial examples. It also depends on an interdependent relationship of input data, its accuracy and trustworthiness, potential for adversarial examples, feature richness (needed for accurate classification and detection), and the data representing all possible case scenarios. We will discuss these features in further detail throughout this paper. Furthermore, this paper examines ML/ DL not only in terms of theoretical and feature-specific limitations but also in terms of practical challenges related to implementation and deployment. Existing papers either focus on how successful ML/ DL deployments were or on the specific implementation challenges they faced, but the challenges of deploying ML/ DL as security and privacy mechanisms are not collectively discussed.

### 1.1 Structure of the Paper

Section [2](#S2 "2 Security and Privacy by Deep Learning ‣ Deep Learning Application in Security and Privacy – Theory and Practice: A Position Paper") elaborates on the existing academic work that has shown the promise of ML/ DL as an automation tool for security and privacy practices. In section [3](#S3 "3 Deep Learning - A Deeper Look at its Application ‣ Deep Learning Application in Security and Privacy – Theory and Practice: A Position Paper"), we dive into the technical discussion of DL and how automation based on it is designed and developed. The discussion is derived from first impressions, based on the authors’ practical experience coming from a security background. Section [4](#S4 "4 Practical Considerations of Deep Learning Deployment ‣ Deep Learning Application in Security and Privacy – Theory and Practice: A Position Paper") articulates the practical considerations that a security practitioner has to take into account when working on DL deployment. Section [5](#S5 "5 Research Challenges for Deep Learning ‣ Deep Learning Application in Security and Privacy – Theory and Practice: A Position Paper") is a list of DL features that would make the technology a useful security tool for cybersecurity practitioners.
2 Security and Privacy by Deep Learning ---------------------------------------- In this section, we survey the types of security and privacy services and applications in which DL is deployed successfully – as represented by academic literature. ### 2.1 Deep Learning for Security and Privacy The set of security and privacy services that are being explored in academic literature to be the target deployment scenarios for DL are: 1. 1. Malware Detection: Efficient pattern recognition in large datasets is what ML/ DL is purpose built for. A number of proposals are put forward in academic literature to identify malware with high accuracy [[16](#bib.bib16), [17](#bib.bib17)]. In most of these proposals, pattern recognition is based on a particular behaviour (communication, syscall and resource usage/ utilisation patterns, etc). For an adversarial entity, the objective is to hide or exhibit its behaviour within the scope of genuine applications to avoid detection. 2. 2. Anomaly Detection or Network Intrusion Detection: Both the anomaly and the network intrusion detection rely on network traffic analysis. Based on this analysis, ML/ DL techniques try to find usage and communication patterns that represent an abnormal behaviour. It is important to keep in mind that anomalous behaviour is not necessarily a set of activities that are not allowed by system policies (security/ privacy). It is just an out-of-the-ordinary activity that can be genuine or malicious. For example, user A has access to client records. Usually, user A only accesses one record a day, but today user A accesses the entire list of clients. If the access control policy only focused on access (may user A access client records?) and not on frequency (how many client records user A can access?), accessing all client records would be a permitted action and not suspicious. However, this action might be anomalous. Such classification and detection of out-of-pattern usages nicely fits within the current capabilities of ML/ DL technology [[18](#bib.bib18), [19](#bib.bib19), [20](#bib.bib20)]. 3. 3. Distributed Denial of Service (DDoS) Detection: DDoS can be viewed as an anomalous request to access a particular resource. Therefore, based on the access patterns to a particular resource (a website or an application), ML/ DL can efficiently identify out-of-pattern access requests [[21](#bib.bib21), [22](#bib.bib22)]. From the above list, we can ascertain that DL is not widely used for privacy-preservation techniques. There is a potential for exposing data on user access patterns based on the user connection graph, especially in the context of data flow analysis. These domains might have unique patterns that can be useful for an effective DL deployment but an academic literature search for applications of DL in these fields did not yield substantial results. Below, we explain some of the identified privacy related services that might be suitable for DL deployment but limited work has been carried out in academic literature: 1. 1. Data Flow Analysis: The flow of data between any two entities can reveal data consumption in an organisation. For example, the flow of data between the consumer database and marketing teams can represent potential value for consumer profiling, targeted marketing and campaign analysis. The data flow and usage in a specific enterprise have a set pattern, even when looking at individual features such as ‘data flow’ and the actual ‘contents of the data’. 
Therefore, ML/ DL can be used to identify anomalous usage of data based on the data flows. Anomalous data flow patterns are used by ML/ DL deployed mostly for Intrusion Detection System (IDS) or Intrusion Detection Prevention (IDP) but not as a privacy preservation function. 2. 2. Data Exposure Potential: Whether in an enterprise environment or in personal settings, individuals have a circle of other individuals with whom they communicate. A community map for each individual can be constructed based on these communication patterns which can represent not only ‘with whom’ individuals share information but also ‘what information’ is being shared with their community. For example, an individual shares one type of information with only a subset of the individuals in his/ her community. This is easily classifiable and based on the patterns, ML/ DL can predict whether information accessible to an individual at a particular point in time has a high probability of being shared with certain other individuals. This analysis can be used to build a data exposure prediction which can be a useful tool for privacy-preservation and assessment. Furthermore, in the event of an information leakage, an analysis of the data flows and the probability of data exposure can be incorporated into the forensic investigation to quickly find any potential points (individuals) that could have leaked the information. The potential of ML/ DL has not been explored in the context of data exposure in current academic literature. We believe that the application of ML/ DL for such analysis shows a lot of promise. Most of the existing literature about privacy and DL is focuses on how to design DL methods in a manner that does not violate the users’ privacy [[23](#bib.bib23), [24](#bib.bib24), [25](#bib.bib25)]. Another application of DL in privacy is to build recommendation systems for users. For example, Yu et. al. [[26](#bib.bib26)] put forward a privacy setting recommendation system (iPhoto) for photo sharing based on image analysis. Most dimensions related to DL and privacy are beyond the scope of the this paper. The scope of the paper is how DL itself can be used as a privacy-protection mechanism. 3 Deep Learning - A Deeper Look at its Application --------------------------------------------------- In this section, we explore the technical aspects of understanding and deploying DL. The discussion revolves around the pre-requisites for DL deployment, the tools that can be used, and DL optimisation. Readers are referred to consult the survey by Zubair et. al. [[27](#bib.bib27)] for an in-depth analysis of DL structures and methodologies. ### 3.1 Representation Learning DL uses representation learning algorithms to automatically identify complex hidden structures in large datasets [[10](#bib.bib10)]. Relations between parameters can be more or less hidden depending on the features present in the data. Representation learning works to solve this problem by transforming raw data into a more useful representation for detection and classification predictors by highlighting the important dependencies [[28](#bib.bib28)]. The challenge is to generalise as much as possible while also preserving most of the information in the original dataset. DL implements the learning technique in the form of a model, a concatenation of multiple, relatively simple layers that each perform a transformation on the data [[28](#bib.bib28)]. 
The layers’ input is either raw data (input layer) or the previous layer’s learned representation of its input (hidden and output layers). This leads to automatically identified, hierarchical levels of abstraction, also called feature extraction, with higher level features defined as a composition of lower-level features [[29](#bib.bib29), [30](#bib.bib30)]. During the training phase, the model adjusts the internal parameters used to transform the data to achieve a more useful result [[10](#bib.bib10)]. ### 3.2 Data Normalisation DL models rely heavily on data as it is the basis of the pre-training and training phases, which in turn underlie the specialisation of a model to a task. DL does not need a perfectly curated dataset due to its learning scheme. Semi-supervised techniques have been shown to alleviate problems, however, a new training strategy and a better cost function could make training on incomplete and noisy data sets more efficient [[31](#bib.bib31)]. Whitening data is a known way of speeding up training convergence, readers are referred to [[32](#bib.bib32)] for details on how to transform the input data. Ioffe and Szegedy [[33](#bib.bib33)] describes batch normalisation, where normalisation is embedded in the model architecture as another method to reduce training-times. It works towards fixing the distribution of the layer’s inputs and thereby solves the problems introduced by internal covariate shift. Internal covariate shift describes the fact that the layers’ input distribution continuously changes during training due to the internal parameters updating [[33](#bib.bib33)]. The difficulty in changing the dataset in any way is to preserve as much of the original information as possible. This can be achieved by normalising the training examples relative to the entire training data [[33](#bib.bib33)]. Other, less efficient ways of combating covariate shift include lowering the training rate and careful parameter initialisation. Using DL in combination with Big Data is a popular concept in the industry, however, there are many challenges that need to be overcome. The three V’s model identifies them as volume, variety, and velocity. Chen and Lin [[31](#bib.bib31)] provides the authors’ thoughts on how to solve these problems. According to the authors, the large volume of Big Data (number of inputs, number of represented classes and high dimensionality of the entries) cannot be accommodated by a single machine due to its limited memory and computing capacity. A distributed framework would be more suited to the task. DL has been successfully utilised for the integration of heterogeneous data, e.g. [[34](#bib.bib34)] and [[35](#bib.bib35)]. Therefore, the authors believe that DL methods can be applied to Big Data’s large variety of data structures with further optimisation work. They propose online learning to combat the velocity (how quickly data is generated). There are many large data sets ranging across a wide selection of categories publicly available which can be used in training and testing networks. Examples are the MNIST database111<http://yann.lecun.com/exdb/mnist> of handwritten digits and the Google Audioset222<https://research.google.com/audioset>, which includes thousands of labelled audio clips. Kaggle333<https://www.kaggle.com> is a platform that hosts ML competitions and maintains public datasets. ### 3.3 Designing Deep Learning Models There are different neural network architectures used in DL, each with their own advantages and disadvantages. 
Convolutional networks are a type of feedforward network that are designed to process multidimensional signals such as images and video [[36](#bib.bib36)], whereas recurrent networks are adapted to work with sequence data which makes them more difficult to train but applicable to natural language processing (NLP) challenges [[37](#bib.bib37)]. Deep Belief Networks (DBNs) are made up of several layers of restricted Boltzmann Machines (RBMs) and are useful for when the training data set is made up of both labelled and unlabelled entries. They often perform better than networks trained only with backpropagation [[36](#bib.bib36)]. The training distribution and structure can be an important factor in the choice of model and learning method. Supervised learning methods require labelled data and tend to have good results when large quantities of data are available [[29](#bib.bib29)]. They adjust the model’s internal parameters based on the training loss, calculated by comparing the predicted output to the expected output as defined by the data entry’s label. When it comes to unsupervised learning, the ultimate goal is to abstract the raw data in a way that identifies the important factors of variation that apply to all classes. [[30](#bib.bib30)] has had success applying a transductive strategy by using linear models such as Principal Component Analysis (PCA), among others, as some of the network’s layers. Semi-supervised learning makes use of both labelled and unlabelled data. The RBMs that make up a DBN are pre-trained with an unsupervised greedy layer-by-layer algorithm and the whole model is then fine-tuned with labelled data and backpropagation. DBNs often perform better than networks trained solely with backpropagation [[36](#bib.bib36)], as the combination of non-linear layers in a model can be sensitive to the initialisation values. Pre-training, as used with DBNs, can mitigate this sensitivity [[29](#bib.bib29)]. When it comes to optimising a model’s accuracy, tuning the hyperparameters is an important step. They are values that directly influence the training of a neural network by configuring a model’s complexity and the learning process [[38](#bib.bib38)], both of which are critical to the model’s performance. However, finding the ideal values for these parameters can be very difficult as fine-tuning is often based on experience. According to [[30](#bib.bib30)], there are two common ways of optimising a model’s performance through the choice of hyperparameters: manual trial and error and a grid search. Both approaches run into problems when the number of parameters is too large. Readers are referred to [[30](#bib.bib30)] and [[39](#bib.bib39)] for a more efficient optimisation based on random search and greedy exploration. The number and type of parameters differ between models and learning algorithms. Some of the most common include the learning rate, momentum, number of hidden units, number of epochs and batch size. Training large, distributed networks is slow, as the use of parallel resources is very inefficient. [[40](#bib.bib40)] introduces a way to reduce the number of free parameters without dropping the accuracy, as many parameters can be predicted and are, therefore, redundant. Over- and underfitting describe situations where a neural network has not learned the ideal generalisation of the training data which leads to poor performance when new data is introduced. 
This can also be described as the bias/ variance dilemma, a trade-off between high bias and high variance [[41](#bib.bib41)]. Common metrics such as training and test error are used to analyse the accuracy of a model can help identify over- and underfitting. High variance means that a model fails to differentiate between the signal (the general, underlying pattern) and the noise (dataset-specific randomness) of a dataset. In other words, an overfit model has failed to sufficiently generalise the features of its specific training distribution and therefore performs poorly on previously unseen data, as it has no general knowledge it can apply. Overfitting can occur with a complex model whose learning algorithm has a low bias and a high variance. Cross validation is a proven method of preventing overfitting by stopping training before the specification becomes to high [[42](#bib.bib42)]. The point in time at which to stop training is identified by comparing the model’s accuracy on the training data to its accuracy on the unseen testing data. Training is stopped if the difference starts growing or is deemed too large, also called early stopping. Reducing the number of parameters is another method of combating overfitting [[42](#bib.bib42)]. Dropout layers have also been shown to be successful because they prevent the co-adaption of a network’s hidden units [[29](#bib.bib29)]. They introduce unpredictable noise into the data by dropping random parameters in each training iteration. Bias describes the difference between the model’s expected output and the correct values. It occurs when the model is oversimplified and does not have enough flexibility to capture the underlying relations of features present in the data or when there are insufficient parameters. A model is said to be underfit if it has a low variance but a high bias and can be identified by a high error on both the training and the test data. A possible solution to this problem is changing the model’s structure and parameters so that it better fits the problem to be solved. Bias and variance are inversely related. The ideal model minimises the expected total error of a learning algorithm, which is defined as the sum of squared bias, variance and irreducible error. While bias and variance are reducible, the irreducible error comes from modelling the problem itself. ### 3.4 Deploying Deep Learning Methods There are many open-source tools and frameworks that support DL which can vary greatly in overhead, running speed and number of pre-made components. Following are short descriptions of a small selection of them. TensorFlow444<https://www.tensorflow.org> is a Python-based library with automatic differentiation capabilities that support ML and DL. The high-performance numerical computations, modelled as data flow graphs, can be applied to other domains as well. TensorFlow is used by companies such as Google, Uber, and AMD. PyTorch555<https://pytorch.org> is another such library which enables rapid research on ML networks. The focus lies on extensibility and low overhead, which is possible because the core logic is written in C++. It also supports reverse mode automatic differentiation, which is the most important type of differentiation for DL applications [[43](#bib.bib43)] and distributed training. In 2017, Uber AI Labs released Pyro666<http://pyro.ai>, a deep probabilistic programming language (PPL) based on PyTorch. 
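To ground the regularisation techniques of Section 3.3 in one of the frameworks above, the following PyTorch sketch (with made-up data; it is illustrative only and not taken from any cited work) applies dropout and a simple early-stopping check on a validation set:

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
# Made-up binary classification data, split into training and validation sets.
X, y = torch.randn(1200, 20), torch.randint(0, 2, (1200,)).float()
X_train, y_train, X_val, y_val = X[:1000], y[:1000], X[1000:], y[1000:]

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Dropout(p=0.5),          # randomly drops hidden units to limit co-adaptation
    nn.Linear(64, 1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

best_val, best_state, patience = float("inf"), None, 0
for epoch in range(200):
    model.train()
    optimiser.zero_grad()
    loss = loss_fn(model(X_train).squeeze(1), y_train)
    loss.backward()
    optimiser.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(X_val).squeeze(1), y_val).item()
    if val_loss < best_val:
        best_val, best_state, patience = val_loss, copy.deepcopy(model.state_dict()), 0
    else:
        patience += 1
        if patience >= 10:      # early stopping: validation loss has stopped improving
            break

model.load_state_dict(best_state)  # keep the weights with the best validation loss
```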
Caffe777<http://caffe.berkeleyvision.org> is a C++ library that provides interfaces for Python and MATLAB [[44](#bib.bib44)]. It is a clean and modifiable framework, due to the fact that the model’s representation is separate from the model’s implementation [[45](#bib.bib45)]. It is very fast in training convolutional networks and allows for seamless switching between devices (CPU and GPU). MATLAB888<https://uk.mathworks.com> can be used for DL among other things and allows users to build and analyse models, even with little expert knowledge in DL. It provides access to models such as GoogLeNet and AlexNet and works with models from Caffe and TensorFlow-Keras. MATLAB also supports collaboration with the PyTorch and MXNet frameworks. MXNet999<https://mxnet.apache.org> is a very versatile DL framework which supports imperative and symbolic programming as well as multiple languages, such as C++, Python, R, Scala, MATLAB and JavaScript. Its running speed is similar to Caffe and significantly faster than TensorFlow and it is used by both AWS and Azure, among others [[44](#bib.bib44)].

4 Practical Considerations of Deep Learning Deployment
-------------------------------------------------------

In this section, we discuss the challenges related to deploying DL as part of cybersecurity and privacy-preservation mechanisms. We discuss three major issues related to DL, which are in no way an exhaustive list. However, the problems listed in this section have a significant impact on current DL implementations.

### 4.1 Training Data Set

Any DL technique requires training to achieve specialisation for a task, therefore the training data set and its structure are very important. There are two crucial elements about the training data set: a) feature-richness and b) trustworthiness. Feature-richness means that the training data should be an extensive collection so that the DL model can identify as many features as possible, which will help it identify genuine and malicious behaviours accurately once it is deployed. Features have to be as extensive as possible; for example, data related to an activity should cover as much information about that activity as possible so a malicious entity has as little room as possible to manoeuvre and trick the deployed DL system. Furthermore, the training data should include a diverse set of behaviours. If a training data set is representative of a behaviour set, the algorithm has a better chance of accurately classifying features in it. However, if the behaviour set is not comprehensive, any behaviour that is not part of the set might be miscategorised and the DL model might not be able to differentiate between genuine and malicious behaviour correctly. The reason for this failure is that the definition, or knowledge base, of genuine and malicious behaviour comes from the behaviour set used for training. Reinforcement learning can be used to accommodate this; however, this can open up a potential avenue for an adversary to modify the behaviour classification of an ML/ DL system. The second crucial element is the data’s trustworthiness. As one of the most important elements of DL, data should be sourced from a trusted environment and this is also true for malicious activities captured (and tagged) in the training data set. The challenge is to capture malicious activities in a trusted manner from a real environment or a lab simulation that accurately depicts how an attacker could behave.
Furthermore, the training is carried out on a data set that represents ‘past’ attacks (known attack patterns) and potentially will not be representative of ‘future’ attacks (unknown vulnerability and attack patterns). The challenges related to new and unknown attacks are further discussed in section [5](#S5 "5 Research Challenges for Deep Learning ‣ Deep Learning Application in Security and Privacy – Theory and Practice: A Position Paper"). ### 4.2 Adversarial Samples There is extensive work in academic literature that discusses the impact and limitation of ML/ DL against adversarial samples [[46](#bib.bib46)]. From a deployment point of view, security and privacy practitioners have to keep in mind that a deployed DL system might be susceptible to adversarial samples. This means that an attacker could influence the DL model’s training to learn malicious activities as genuine. By doing so, attackers are enabled to accomplish their goal without DL detecting and flagging them. The challenge related to adversarial samples is crucial as organisations deploying DL based security and privacy mechanism would prefer for it to evolve over time, thereby accommodating the increasing sophistication in the threat landscape. However, allowing the evolution of the DL model after initial training opens it up to adversarial samples. On the other hand, if a DL technique is restricted to the initial training then it is not flexible and extensible, two of the important functions DL should have to cope with the challenges of cybersecurity and privacy. A potential middle ground could be to select a DL technique that is the least susceptible and designed to withstand adversarial samples. Unfortunately, even with such methodologies, the likelihood of adversarial samples cannot be completely removed. Therefore, adversarial samples are a threat vector that will see more sophistication in the future as more and more organisations deploy ML/ DL based cybersecurity and privacy-preservation mechanisms. ### 4.3 General Data Protection Regulation (GDPR) Organisations dealing with EU citizens’ data have to comply with GDPR regulations. GDPR gives a number of rights to consumers, among which are the two that we are going to discuss in this section: Right-to-Know (RtK) and Right-to-Rectification (RtR). Regarding RtK, Article 15.1.h states that “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject” [[5](#bib.bib5)]. This article requires a meaningful information about the processing method used to process their data. As discussed before, DL is chaotic in many instances and the steps taken to reach a particular decision might have limited traceability or support for reverse-engineering. As an example, a user is in his or her rights to request information on why they received a certain result from an organisation. The organisation then has to explain how the user’s data was processed by the company’s AI to generate that particular result. GDPR also holds firms accountable for bias and discrimination in their automated decisions. The challenge of explaining how DL has reached a specific decision becomes paramount – an aspect of the DL that has not been extensively investigated. 
To what extent DL's choices can be explained, and whether that explanation is acceptable and, more importantly, meaningful to regulatory authorities and consumers, needs to be researched further. RtR (Article 16) states that "[t]he data subject shall have the right to obtain from the controller without undue delay the rectification of inaccurate personal data concerning him or her" [5]. If a user exercises RtR, they request changes to their personal data stored in the system. How this change in the data will impact previous processing and learning, which were based on the incorrect data, is still a big question. The challenge is to make DL rectify its input data selectively after processing, in a manner that does not require complete retraining. On a side note, depending on how DL is deployed, the Right-to-Forget, or RtF (GDPR Article 17), might have an impact if a sufficient number of consumers/users request their data to be deleted. At that point, the knowledge set reflecting the behaviour of an organisation's consumers/users will no longer be accurate. How this impacts DL's subsequent decisions is still unclear and requires further investigation.

A cybersecurity and privacy practitioner needs a clear view of the needs and vision for a DL deployment. There are plenty of unanswered questions related to DL in terms of research (Section 5), operation, and legislation (GDPR). It is safe to say that this technology has the potential to be beneficial by improving security and privacy preservation. However, the pertinent question is whether it is ready and mature enough to be deployed extensively as a security and privacy mechanism. The answer to this is complex and depends on multiple factors, including:

1. Organisational requirements and the prioritised security objectives.
2. How the organisation envisions using ML/DL, keeping in mind that ML/DL are not silver bullets.
3. Understanding the limitations of ML/DL and complementing these techniques with traditional security and privacy measures.
4. Accepting that ML/DL are at an early stage of development and might go through many improvements in the next few years; deployed systems will therefore have to keep up with rapid change (flexibility, extensibility and scalability).

5 Research Challenges for Deep Learning
----------------------------------------

In this section, we put forward a list of relevant topics and questions for ML/DL research from the perspective of a cybersecurity practitioner.

1. Policy change impact analysis: In an enterprise environment, policies change regularly, and these changes can relate to the security and privacy aspects of the enterprise. The impact assessment of such policies on the enterprise environment is based on human experts' knowledge. If the enterprise has deployed ML/DL as a security and privacy measure, policy changes need to be reflected in the ML/DL method's learning and execution. To the authors' knowledge, there is no evaluation of how dynamic policies will impact currently deployed ML/DL implementations. Predictive impact analysis of policy changes on DL-based security and privacy mechanisms would therefore be an important step forward.

2. Defining a new policy: An organisation's security and privacy objectives are specified by policies and rule-sets.
In existing DL, these policies and rule-sets are represented in the labelling of the individual records in the training data set. If the policy changes after the deployment of a DL-based system, the available option is to generate a new training data set based on the new policies and retrain the DL model. Generating the training data set and retraining are costs in terms of performance and time. The challenge is to cut this cost down and make policy changes as straightforward as they are for traditional security mechanisms such as firewalls, access control and IDS, to name a few.

3. Preparing DL to cope with the 'future': The cybersecurity and privacy landscape is constantly evolving. To cope with this change, DL has to be flexible and have the ability to learn new patterns even after deployment. Furthermore, prior knowledge already learned by a particular instance of DL is valuable, and the ability to transfer it to other instances (for example, among multiple organisations) would vastly improve the readiness of the collective cybersecurity field. A potential path forward could be to develop DL techniques with lifelong learning as a crucial part.

4. Isolated or collaborative learning: Isolated learning has its pros and cons. The positive side is that, as an organisation, your own specific behaviour is profiled. However, this also means that unless you experience a cyber attack, you will not be able to profile it. With collaborative learning, if a single instance of the collaboration experiences a cyber attack, its profiling can then be shared with the other instances in the group. This has the potential to rapidly improve security countermeasures against new and previously unknown attacks. Collaborative learning introduces some additional challenges, such as:

   * Knowledge-based collaboration: In collaborative learning, should algorithms share their knowledge or simply the raw records of the out-of-profile observation? This also requires a method for sharing prior knowledge between multiple DL instances.
   * Raw-records-based collaboration: Sharing raw records seems simple, as each instance can run its own learning process over them. However, this could leak security-sensitive data and violate privacy requirements. For raw-records-based collaboration, efficient and strong anonymisation techniques have to be developed. Such a technique has to protect privacy- and security-sensitive data, but at the same time retain sufficient features so that the records are still useful for training other DL instances.

5. Making deep learning forget: There are a number of situations where it is preferable to make DL de-profile some of the records from its knowledge base, for example: a) the discovery of malicious data in the training data set that now needs to be re-labelled as such, b) removing adversarial samples from the DL knowledge, and c) a consumer/user exercising RtR or the right to forget under the GDPR. In such situations, DL techniques need to 'forget' certain records. How to achieve this seems to be an open question, and it will be crucial in a future with increased public awareness about privacy and adversaries successfully training DL implementations with adversarial samples.

6 Conclusion
-------------

In this paper, we briefly explore the potential, practicality, implications and shortcomings of DL mechanisms in fields such as security and privacy preservation.
There are numerous proposals in the academic literature that advocate the success of DL as an effective mechanism for cybersecurity. We do not evaluate their claims in this paper. Instead, we treat DL as a mature domain and evaluate how a security practitioner would go about deploying it, what challenges and issues they would have to overcome, and what options are available to resolve some of these issues. We do consider that DL has come a long way and can potentially be applied to security and privacy functions with a defined set of static behaviours. In such situations, DL can efficiently detect behavioural violations with high accuracy. However, it is too early to consider it an extensively usable security measure in its own right, and DL has a long way to go before it is mature enough to be deployed as a standalone Unified Threat Management (UTM) environment. In this paper, we have discussed the aspects of DL an organisation should keep in mind when deploying a DL-based solution. In addition, we have included a list of features that, if provided by DL-based mechanisms, would be useful to security practitioners in a number of scenarios. In conclusion, DL has a lot of promise and, with the right features, it could become an impactful tool in the security and privacy arsenal. With the increasing sophistication and complexity of future technology in the current infrastructure, AI-based security and privacy countermeasures (ML/DL) might be the next logical step. For this reason, cybersecurity researchers have to become active participants in the ML/DL evolution, rather than just applying these techniques to security and privacy problems as off-the-shelf kits.
281bb35b-bf4e-4e53-a586-5cdba13bec3b
trentmkelly/LessWrong-43k
LessWrong
FTL travel summary

I started writing this 2 years ago, got bored, and never finished. Posting it now just to get it out of my drafts.

Desirability of fast travel

If a superintelligent AI gains power over the physical world, it's very likely to want to expand its influence quickly. This is most obvious for a misaligned SAI such as a paperclip maximizer; if it has a utility function that rewards it for making as many paperclips as possible in a short time frame, it will want to gain access to as large a portion of the universe as possible in a short time. Perfectly aligned (or only slightly misaligned) SAIs will also want to expand quickly. Even setting aside questions of whether we should colonize the universe, any well-aligned SAI will have a goal of preserving humanity's existence, which means preventing all other civilizations from creating their own misaligned SAIs. The most straightforward way to do this would be to send out probes to other planets that ensure that those other civilizations do not create an SAI that poses a threat to humanity. The faster those probes can move, the better their chance of preventing the rise of SAI with conflicting goals. If another civilization has already created an SAI with a conflicting goal and they come into contact, there are many ways that could go down, but in most scenarios it will be an advantage to control more of the universe than the other superintelligence does. This again suggests the desirability of developing maximally-fast ways to move across the universe. FTL travel also increases the amount of the universe we have access to, potentially to an infinite degree. This means that any future civilization that wants to access more negentropy will want to use FTL.

Relevance of FTL

Given the likelihood that superintelligence leads to fast expansion in some form, the question of whether faster-than-light travel is possible has implications for several current questions, and many more far-future ones. For example: 1. The traditional Fermi Par
e3e90322-bec1-47e5-acd8-22d1954d8b3c
trentmkelly/LessWrong-43k
LessWrong
Your transhuman copy is of questionable value to your meat self.

I feel safe saying that nearly everyone reading this will agree that, given sufficient technology, a perfect replica or simulation could be made of the structure and function of a human brain, producing an exact copy of an individual mind including a consciousness. Upon coming into existence, this consciousness will have a separate but baseline-identical subjective experience to the consciousness from which it was copied, as it was at the moment of copying. The original consciousness will continue its own existence/subjective experience. If the brain containing the original consciousness is destroyed, the consciousness within ceases to be. The existence or non-existence of a copy is irrelevant to this fact. With this in mind, I fail to see the attraction of the many transhuman options for extra-meat existence, and I see no meaningful immortality therein, if that's what you came for.

Consciousness is notoriously difficult to define and analyze, and I am far from an expert in its study. I define it as an awareness: the sense organ which perceives the activity of the mind. It is not thought. It is not memory or emotion. It is the thing that experiences or senses these things. Memories will be gained and lost, thoughts and emotions come and go, the sense of self remains even as the self changes. There exists a system of anatomical structures in your brain which, by means of electrochemical activity, produces the experience of consciousness. If a brain injury wiped out major cognitive functions but left those structures involved in the sense of consciousness unharmed, you would, I believe, have the same central awareness of Self as Self, despite perhaps lacking all language or even the ability to form thoughts or understand the world around you. Consciousness, this awareness, is, I believe, the most accurate definition of Self, Me, You. I realize this sort of terminology has the potential to sound like mystical woo. I believe this is due to the twin effects of the
055c42f7-4f4e-4266-bc63-5f806e20645c
trentmkelly/LessWrong-43k
LessWrong
[Optimal Philanthropy] Laptops without instructions Just read this article, which describes a splashy, interesting narrative which jives nicely with my worldview. Which makes me suspicious. http://dvice.com/archives/2012/10/ethiopian-kids.php > The One Laptop Per Child project started as a way of delivering technology and resources to schools in countries with little or no education infrastructure, using inexpensive computers to improve traditional curricula. What the OLPC Project has realized over the last five or six years, though, is that teaching kids stuff is really not that valuable. Yes, knowing all your state capitols how to spell "neighborhood" properly and whatnot isn't a bad thing, but memorizing facts and procedures isn't going to inspire kids to go out and learn by teaching themselves, which is the key to a good education. Instead, OLPC is trying to figure out a way to teach kids to learn, which is what this experiment is all about. > > Rather than give out laptops (they're actually Motorola Zoom tablets plus solar chargers running custom software) to kids in schools with teachers, the OLPC Project decided to try something completely different: it delivered some boxes of tablets to two villages in Ethiopia, taped shut, with no instructions whatsoever. Just like, "hey kids, here's this box, you can open it if you want, see ya!" > > Just to give you a sense of what these villages in Ethiopia are like, the kids (and most of the adults) there have never seen a word. No books, no newspapers, no street signs, no labels on packaged foods or goods. Nothing. And these villages aren't unique in that respect; there are many of them in Africa where the literacy rate is close to zero. So you might think that if you're going to give out fancy tablet computers, it would be helpful to have someone along to show these people how to use them, right? > > But that's not what OLPC did. They just left the boxes there, sealed up, containing one tablet for every kid in each of the villages (nearly a thousand tablets in tot
34923c06-11bc-4591-b2e0-01a6f7ca891f
trentmkelly/LessWrong-43k
LessWrong
Is Progress Real? ... I couldn't find the essay “Is Progress Real?” by the historians Will and Ariel Durant (The Lessons of History, 1968) anywhere on the internet so here I am now, posting it on the internet. I think it's a classic, short but filled with provoking ideas and beautiful prose and I don’t think it’s an exaggeration to say that no one writes like this anymore. Afterwards, I offer some commentary on the essay and make a devastating criticism of the idea of progress (a death blow, really). Enjoy. ---------------------------------------- Is Progress Real? Against this panorama of nations, morals, and religions rising and falling, the idea of progress finds itself in dubious shape. Is it only the vain and traditional boast of each "modern'" generation? Since I have admitted no substantial change in man's nature during historic times, all technological advances will have to be written off as merely new means of achieving old ends—the acquisition of goods, the pursuit of one sex by the other (or by the same), the overcoming of competition, the fighting of wars. One of the discouraging discoveries of our disillusioning century is that science is neutral: it will kill for us as readily as it will heal, and will destroy for us more readily than it can build. How inadequate now seems the proud motto of Francis Bacon, "Knowledge is power"! Sometimes we feel that the Middle Ages and the Renaissance, which stressed mythology and art rather than science and power, may have been wiser than we, who repeatedly enlarge our instrumentalities without improving our purposes.  Our progress in science and technique has involved some tincture of evil with good. Our comforts and conveniences may have weakened our physical stamina and our moral fiber. We have immensely developed our means of locomotion, but some of us use them to facilitate crime and to kill our fellow men or ourselves. We double, triple, centuple our speed, but we shatter our nerves in the process, and are the same trousere
a9c99f36-50f5-4608-af9e-59ce5879864b
trentmkelly/LessWrong-43k
LessWrong
[AN #58] Mesa optimization: what it is, and why we should care Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email. Highlights Risks from Learned Optimization in Advanced Machine Learning Systems (Evan Hubinger et al): Suppose you search over a space of programs, looking for one that plays TicTacToe well. Initially, you might find some good heuristics, e.g. go for the center square, if you have two along a row then place the third one, etc. But eventually you might find the minimax algorithm, which plays optimally by searching for the best action to take. Notably, your outer optimization over the space of programs found a program that was itself an optimizer that searches over possible moves. In the language of this paper, the minimax algorithm is a mesa optimizer: an optimizer that is found autonomously by a base optimizer, in this case the search over programs. Why is this relevant to AI? Well, gradient descent is an optimization algorithm that searches over the space of neural net parameters to find a set that performs well on some objective. It seems plausible that the same thing could occur: gradient descent could find a model that is itself performing optimization. That model would then be a mesa optimizer, and the objective that it optimizes is the mesa objective. Note that while the mesa objective should lead to similar behavior as the base objective on the training distribution, it need not do so off distribution. This means the mesa objective is pseudo aligned; if it also leads to similar behavior off distribution it is robustly aligned. A central worry with AI alignment is that if powerful AI agents optimize the wrong objective, it could lead to catastrophic outcomes for humanity. With the possibility of mesa optimizers, this worry is doubled: we need to ensure both that the base objective is aligned with humans (calle
1db4e567-e472-4ca3-a981-25a78c8d46c2
trentmkelly/LessWrong-43k
LessWrong
The meta-evaluation question Evaluation refers to an agent's evaluation of the expected benefit of a particular action; meta-evaluation is the agent's evaluation of the computational cost-effectiveness of evaluation in general. There are two difficulties with meta-evaluation.  One is the nature of the data, which by nature will consist of unique events*.  The second is the obvious self-referential nature of the problem. Meta-evaluation is a central issue for any theory of rational decision-making, yet I have not yet seen it directly addressed here.   *This quality of nonredundancy occurs in its purest form in "mathematical data," ie the distribution of primes.
e967e4c4-315b-45d5-9628-0dcd54c86710
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] Critiques of Redwood Research An anonymous user named Omega posted a critique of Redwood on the EA Forum. The post highlights four main areas: (1) Lack of senior ML staff, (2) Lack of communication & engagement with the ML community, (3) Underwhelming research output, and (4) Work culture issues. I'm linkposting it here, since I imagine some LW users will have thoughts/comments. See also this comment from Nate Thomas, and note that Redwood has an anonymous  feedback form.  > We believe that Redwood has some serious flaws as an org, yet has received a significant amount of funding from a central EA grantmaker (Open Philanthropy). Inadequately kept in check conflicts of interest (COIs) might be partly responsible for funders giving a relatively immature org lots of money and causing some negative effects on the field and EA community. We will share our critiques of Constellation (and Open Philanthropy) in a follow-up post. We also have some suggestions for Redwood that we believe might help them achieve their goals. > > Redwood is a young organization that has room to improve. While there may be flaws in their current approach, it is possible for them to learn and adapt in order to produce more accurate and reliable results in the future. Many successful organizations made significant pivots while at a similar scale to Redwood, and we remain cautiously optimistic about Redwood's future potential. Standard caveat that I don't agree with everything in the post or even endorse its main conclusions; also see my comment.
d3305a5a-3d69-4c29-9a5a-5130bd25ceca
trentmkelly/LessWrong-43k
LessWrong
Transformer Debugger

Transformer Debugger (TDB) is a tool developed by OpenAI's Superalignment team with the goal of supporting investigations into circuits underlying specific behaviors of small language models. The tool combines automated interpretability techniques with sparse autoencoders. TDB enables rapid exploration before needing to write code, with the ability to intervene in the forward pass and see how it affects a particular behavior. It can be used to answer questions like, "Why does the model output token A instead of token B for this prompt?" or "Why does attention head H attend to token T for this prompt?" It does so by identifying specific components (neurons, attention heads, autoencoder latents) that contribute to the behavior, showing automatically generated explanations of what causes those components to activate most strongly, and tracing connections between components to help discover circuits. These videos give an overview of TDB and show how it can be used to investigate indirect object identification in GPT-2 small:

* Introduction
* Neuron viewer pages
* Example: Investigating name mover heads, part 1
* Example: Investigating name mover heads, part 2

Contributors: Dan Mossing, Steven Bills, Henk Tillman, Tom Dupré la Tour, Nick Cammarata, Leo Gao, Joshua Achiam, Catherine Yeh, Jan Leike, Jeff Wu, and William Saunders. Thanks to Johnny Lin for contributing to the explanation simulator design.
54a2669e-aea5-4d05-ade2-ad999b038f0b
trentmkelly/LessWrong-43k
LessWrong
Temporal allies and spatial rivals

(This post co-authored by Robin Hanson and Katja Grace.)

In the Battlestar Galactica TV series, religious rituals often repeated the phrase, "All this has happened before, and all this will happen again." It was apparently comforting to imagine being part of a grand cycle of time. It seems less comforting to say "Similar conflicts happen out there now in distant galaxies." Why?

Consider two possible civilizations, stretched either across time or space:

* Time: A mere hundred thousand people live sustainably for a billion generations before finally going extinct.
* Space: A trillion people spread across a thousand planets live for only a hundred generations, then go extinct.

Even though both civilizations support the same total number of lives, most observers probably find the time-stretched civilization more admirable and morally worthy. It is "sustainable," and in "harmony" with its environment. The space-stretched civilization, in contrast, seems "aggressively" expanding and risks being an obese "repugnant conclusion" scenario. Why?

Finally, consider that people who think they are smart are often jealous to hear a contemporary described as "very smart," but are much happier to praise the genius of a Newton, Einstein, etc. We are far less jealous of richer descendants than of richer contemporaries. And there is far more sibling rivalry than rivalry with grandparents or grandkids. Why? There seems an obvious evolutionary reason – sibling rivalry makes a lot more evolutionary sense. We compete genetically with siblings and contemporaries far more than with grandparents or grandkids. It seems that humans naturally evolved to see their distant descendants and ancestors as allies, while seeing their contemporaries more as competitors. So a time-stretched world seems chock-full of allies, while a space-stretched one seems instead full of potential rivals, making the first world seem far more comforting. Having identified a common human instinct about what to admi
39e8f19c-d540-44b6-acea-dace4b9a7be6
StampyAI/alignment-research-dataset/special_docs
Other
Power to the People: The Role of Humans in Interactive Machine Learning

Saleema Amershi, Maya Cakmak, W. Bradley Knox, Todd Kulesza (all authors contributed equally)

Abstract

Systems that can learn interactively from their end-users are quickly becoming widespread. Until recently, this progress has been fueled mostly by advances in machine learning; however, more and more researchers are realizing the importance of studying users of these systems. In this article we promote this approach and demonstrate how it can result in better user experiences and more effective learning systems. We present a number of case studies that demonstrate how interactivity results in a tight coupling between the system and the user, exemplify ways in which some existing systems fail to account for the user, and explore new ways for learning systems to interact with their users. After giving a glimpse of the progress that has been made thus far, we discuss some of the challenges we face in moving the field forward.

Introduction

Machine learning is a powerful tool for transforming data into computational models that can drive user-facing applications. However, potential users of such applications, who are often domain experts for the application, have limited involvement in the process of developing them. The intricacies of applying machine learning techniques to everyday problems have largely restricted their use to skilled practitioners. In the traditional applied machine learning workflow, these practitioners collect data, select features to represent the data, pre-process and transform the data, choose a representation and learning algorithm to construct the model, tune parameters of the algorithm, and finally assess the quality of the resulting model. This assessment often leads to further iterations on many of the previous steps. Typically, any end-user involvement in this process is mediated by the practitioners and is limited to providing data, answering domain-related questions, or giving feedback about the learned model. This results in a design process with lengthy and asynchronous iterations and limits the end-users' ability to impact the resulting models.

Consider the following case study of machine-learning practitioners working with biochemists to develop a protein taxonomy by clustering low-level protein structures (Caruana et al. 2006). The project lead recounted their experience in an invited talk at the IUI 2013 Workshop on Interactive Machine Learning (Amershi et al. 2013). First, the practitioners would create a clustering of the protein structures. Then, they would meet with the biochemists to discuss the results. The biochemists would critique the results (e.g., "these two proteins should / should not be in the same cluster" or "this cluster is too small"), providing new constraints for the next iteration. Following each meeting, the practitioners would carefully adjust the clustering parameters to adhere to the given constraints and re-compute clusters for the next meeting. Frustrated by the inefficiency of this laborious process, Caruana et al. went on to develop learning algorithms that enable interactive exploration of the clustering space and incorporation of new clustering constraints (Cohn et al. 2003, Caruana et al. 2006).
These algorithms were intended to give people the ability to rapidly iterate and inspect many alternative clusterings within a single sitting. Their later approach is an example of interactive machine learning, where learning cycles involve more rapid, focused, and incremental model updates than in the traditional machine learning process. These properties enable everyday users to interactively explore the model space through trial-and-error and drive the system towards an intended behavior, reducing the need for supervision by practitioners. Consequently, interactive machine learning can facilitate the democratization of applied machine learning, empowering end-users to create machine learning-based systems for their own needs and purposes. However, enabling effective end-user interaction with interactive machine learning introduces new challenges that require a better understanding of end-user capabilities, behaviors, and needs. This article promotes the empirical study of the users of interactive machine learning systems as a method for addressing this challenge. Through a series of case studies, we illustrate the following propositions:

1) Rapid, focused and incremental learning cycles result in a tight coupling between the user and the system, where the two influence one another. As a result it is difficult to decouple their influence on the resulting model and study such systems in isolation.

2) Explicitly studying user interaction can challenge assumptions of traditional learning systems about users and better inform the design of interactive learning systems.

3) The ways in which end-users interact with learning systems can be expanded to ways in which practitioners do (e.g., tuning parameters or defining new constraints); however, novel interaction techniques should be carefully evaluated with potential end-users.

While the presented case studies paint a broad picture of recent research in user interaction with interactive machine learning, this article does not exhaustively survey the literature in this space. Rather, these case studies are selected to highlight the role and importance of the user within the interactive machine learning process, serving as an introduction to the topic and a vehicle for considering this body of research altogether. We conclude this article with a discussion of the current state of the field, identifying opportunities and open challenges for future interactive machine learning research.

Interactive Machine Learning

The applied machine learning workflow often involves long and complex iterations. The process starts with data provided by domain experts or specifically collected for the target application. Machine learning practitioners then work with domain experts to identify features to represent the data. Next, the practitioners experiment with different machine learning algorithms, iteratively tuning parameters, tweaking features, and sometimes collecting more data to improve target performance metrics. Results are then further examined both by practitioners and domain experts to inform the subsequent iteration. At the end of this long cycle, the model is updated in several ways and can be drastically different from the previous iteration. Furthermore, this iterative exploration of the model space is primarily driven by the machine learning practitioners, who rely on their understanding of machine learning techniques to make informed model updates in each iteration.
In contrast, model updates in interactive machine learning are more rapid (the model gets updated immediately in response to user input), focused (only a particular aspect of the model is updated), and incremental (the magnitude of the update is small; the model does not change drastically with a single update). This allows users to interactively examine the impact of their actions and adapt subsequent inputs to obtain desired behaviors. As a result of these rapid interaction cycles, even users with little or no machine learning expertise can steer machine-learning behaviors via low-cost trial-and-error or focused experimentation with inputs and outputs. Figure 1 illustrates traditional applied machine learning and interactive machine learning, highlighting their contrasting characteristics.

Figure 1: In machine learning, people iteratively supply information to a learning system and then observe and interpret the outputs of the system to inform subsequent iterations. In interactive machine learning, these iterations are more focused, frequent and incremental than traditional machine learning. The tighter interaction between users and learning systems in interactive machine learning necessitates an increased focus on studying the user's involvement in the process.

Perhaps the most familiar examples of interactive machine learning in real-world applications are recommender systems such as Amazon product recommendations, Netflix movie recommendations, and Pandora music recommendations. Users of recommender systems are often asked targeted questions about their preferences for individual items (which they provide by 'liking' or 'disliking' them, for example). These preferences are then promptly incorporated in the underlying learning system for subsequent recommendations. If a recommender system begins recommending undesired items after incorporating new preferences, the user may attempt to redirect the system by correcting it or providing different preference information in the future. (In this article we examine interactive machine learning systems in which the human is consciously interacting with the machine learner in order to improve it. That is, we do not consider interactive machine learning systems that obtain user feedback implicitly, e.g., websites that may automatically adapt their presentation to a user's click history without their knowledge.) We next present two case studies that exemplify the interactive machine learning process and demonstrate its potential as an end-user tool.

Interactive machine learning for image segmentation

Fails and Olsen (2003) were the first to introduce the term interactive machine learning in the human-computer interaction community, characterizing it with rapid train-feedback-correct cycles, where users iteratively provide corrective feedback to a learner after viewing its output. They demonstrated this process with their Crayons system, which allowed users with no machine learning background to train pixel classifiers by iteratively marking pixels as foreground or background through brushstrokes on an image. After each user interaction, the system responded with an updated image segmentation for further review and corrective input. Evaluations of Crayons via user studies revealed that the immediate output provided by the system allowed users to quickly view and correct misclassifications by adding new training data in the most problematic areas. As illustrated in Figure 2, after an initial classification, the user provides Crayons with more data at the edges of the hand where the classifier failed. When asked what they were thinking while interacting with the system, most users stated that they were focused on seeing parts of the image that were classified incorrectly. Fails and Olsen's work on Crayons demonstrated that users modify their behavior based on a learner's outputs, which is an underlying premise for much of the following research on interactive machine learning.

Figure 2: Interactive training of the Crayons system (Fails & Olsen 2003). The system takes pixels labeled as background/foreground as input (provided through brush strokes), and gives a fully segmented image as output (obtained through a classifier that labels each pixel as foreground/background). The user's input is focused on areas where the classifier is failing in previous iterations.

Interactive machine learning for gesture-based music

Another example of an interactive machine learning system comes from the realm of music composition and performance. This domain is naturally interactive: musicians are accustomed to receiving immediate feedback when interacting with a musical instrument. Fiebrink and colleagues (2011) developed the Wekinator, a machine learning system for enabling people to interactively create novel gesture-based instruments, such as moving an arm in front of a web camera to produce different sounds based on the arm's position, speed, or rotation. In this system, a neural network receives paired gestures and sounds from the user as input and learns how to interpolate from unobserved gesture positions to a range of sounds. Users evaluate their instruments directly by gesturing and assessing the produced sounds. While observing students using Wekinator in an interdisciplinary music and computer science course, the authors found that as students trained their respective instruments, the interactive nature of the system also helped train the students. For example, the students learned how to recognize noise in their training samples and provide clearer examples to the learner. In some cases, students even adjusted their goals to match the observed capabilities of the learner. In a follow-up investigation with a professional cellist (Fiebrink et al. 2011), the cellist identified flaws in her playing technique while trying to train a gesture recognizer. The process revealed that the cellist's bowing articulation was not as precise as she had believed. By observing the outputs of the system in real-time, Wekinator users were able to modify their behavior in ways that allowed them to create instruments to their satisfaction.

Summary

These examples illustrate the rapid, focused, and incremental interaction cycles fundamental to interactive machine learning; it is these cycles that facilitate end-user involvement in the machine learning process. These cycles also result in a tight coupling between user and the system, making it impossible to study the system in isolation from the user. This necessitates an increased focus on studying how users can effectively influence the machine learning system and how the learning system can appropriately influence the users.
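To make the rapid train-feedback-correct cycle just summarized concrete, here is a hedged sketch of a Crayons-style loop using scikit-learn's SGDClassifier. It is an illustration under assumed details (toy features, a simulated user), not the original Crayons implementation: each simulated "brushstroke" contributes a few labelled pixels, the model is updated incrementally, and the full image is re-segmented so the user could inspect the result.

```python
# Hedged sketch of an interactive, incremental train-feedback-correct loop.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # 0 = background, 1 = foreground

def user_brushstroke(rng):
    """Stand-in for the user labelling a handful of pixels (feature vectors + labels)."""
    X = rng.random((20, 3))          # e.g. RGB values of the brushed pixels
    y = (X[:, 0] > 0.5).astype(int)  # toy ground truth the 'user' is conveying
    return X, y

rng = np.random.default_rng(0)
image_pixels = rng.random((1000, 3))  # the full image to segment

for iteration in range(5):
    X_new, y_new = user_brushstroke(rng)
    model.partial_fit(X_new, y_new, classes=classes)  # rapid, focused, incremental update
    segmentation = model.predict(image_pixels)        # shown back to the user immediately
    print(f"iteration {iteration}: foreground pixels = {int(segmentation.sum())}")
```

The point of the sketch is the shape of the loop, a small focused update followed by immediate, inspectable output, rather than any particular choice of learner.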
The following section examines how explicitly studying end-users can challenge assumptions of traditional machine learning and better inform the development of interactive machine learning systems. Many of the case studies to follow additionally consider less traditional types of input and output, moving beyond labeled examples and observations of learner predictions.

Studying User Interaction with Interactive Machine Learning

The increased interaction between users and learning systems in interactive machine learning necessitates an increased understanding of how end-user involvement impacts the learning process. In this section, we present case studies illustrating how such an understanding can ultimately lead to better-informed system designs. First, we present case studies demonstrating how people may violate assumptions made by traditional machine learners, resulting in unexpected outcomes and user frustration. Next, we present case studies indicating that people may want to interact with machine learning systems in richer ways than anticipated, suggesting new input and output capabilities. Finally, we present case studies that experiment with increasing transparency about how machine learning systems work, finding that such transparency can improve the user experience in some scenarios, as well as the accuracy of resulting models.

Users are people, not oracles

Active learning is a machine learning paradigm in which the learner chooses the examples from which it will learn (Settles 2010). These examples are selected from a pool of unlabeled samples based on some selection criterion (e.g., samples for which the learner has maximum uncertainty). For each selected sample the learner queries an oracle to request a label. This method has had success in accelerating learning (i.e., requiring fewer labels to reach a target accuracy) in applications like text classification and object recognition, where oracles are often paid to provide labels over a long period of time. However, Cakmak and colleagues (2010) discovered that when applied to interactive settings, such as a person teaching a task to a robot by example, active learning can cause several problems. Cakmak's study (Figure 3) found that the constant stream of questions from the robot during the interaction was perceived as imbalanced and annoying. The stream of questions also led to a decline in the user's mental model of how the robot learned, causing some participants to "turn their brain off" or "lose track of what they were teaching" (according to their self report) (Cakmak et al. 2010). Guillory and Bilmes (2011) reported similar findings for an active movie recommendation system they developed for Netflix. These studies reveal that users are not necessarily willing to be simple oracles (i.e., repeatedly telling the computer whether it is right or wrong), breaking a fundamental assumption of active learning. Instead, these systems need to account for human factors such as interruptibility or frustration when employing methods like active learning.

Figure 3: Users teaching new concepts to a robot by providing positive and negative examples. (Left) Passive learning: examples are chosen and presented by the user. (Right) Active learning: particular examples are requested by the learner. Although active learning results in faster convergence, users get frustrated from having to answer the learner's long stream of questions and not having control over the interaction.
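For readers unfamiliar with the query-selection step underlying this paradigm, the following is a minimal uncertainty-sampling sketch. It is a generic illustration, not the robot-teaching setup from Cakmak's study: the learner repeatedly asks the "oracle" (here simulated by ground-truth labels) for the label of the pooled example it is least certain about.

```python
# Hedged sketch of pool-based active learning with uncertainty sampling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(500, 2))
true_labels = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # stands in for the human oracle

# Seed set containing both classes so the first fit is well-defined.
seed = np.concatenate([np.where(true_labels == 1)[0][:5],
                       np.where(true_labels == 0)[0][:5]])
labelled_idx = list(seed)
model = LogisticRegression(max_iter=1000)

for query in range(20):
    model.fit(X_pool[labelled_idx], true_labels[labelled_idx])
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(proba - 0.5)       # largest where the model is least sure
    uncertainty[labelled_idx] = -np.inf      # never re-query already-labelled examples
    next_idx = int(np.argmax(uncertainty))
    labelled_idx.append(next_idx)            # in a real system, a person answers here

print("labels requested:", len(labelled_idx))
```

In the studies above, the problem is not this selection rule itself but the fact that a human, unlike a paid labelling service, experiences the resulting stream of queries as an interruption.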
People tend to give more positive than negative feedback to learners

In reinforcement learning, an agent senses and acts in a task environment and receives numeric reward values after each action. With this experience, the learning agent attempts to find behavioral policies that improve its expected accumulation of reward. A number of research projects have investigated the scenario in which this reward comes as feedback from a human user rather than a function predefined by an expert (Isbell et al. 2006, Thomaz and Breazeal 2008, Knox and Stone 2012). In evaluating the feasibility of non-expert users teaching through reward signals, these researchers aimed to both leverage human knowledge to improve learning speed and permit users to customize an agent's behavior to fit their own needs. Thomaz and Breazeal (2008) observed that people have a strong tendency to give more positive rewards than negative rewards. Knox and Stone (2012) later confirmed this positive bias in their own experiments. They further demonstrated that such bias leads many agents to avoid the goal that users are teaching it to reach (e.g. the water in Figure 4). This undesirable consequence occurs with a common class of reinforcement learning algorithms: agents that value reward accrued over the long term and are being taught to complete so-called episodic tasks. This insight provided justification for the previously popular solution of making agents that hedonistically pursue only short-term human reward, and it led Knox and Stone to create an algorithm that successfully learns by valuing human reward that can be gained in the long-term (2013). Agents trained through their novel approach were more robust to environmental changes and behaved more appropriately in unfamiliar states than did more hedonistic (i.e., myopic) variants. These agents and the algorithmic design guidelines Knox and Stone created were the result of multiple iterations of user studies, which identified positive bias and then verified its hypothesized effects.

Figure 4: Two task domains for reinforcement learning agents taught by human users. (Left) A cooking robot that must pick up and use the ingredients in an acceptable order (Thomaz and Breazeal, 2006). The green vertical bar displays positive feedback given by a click-and-drag interface. (Right) A simulated robot frog that users teach how to navigate to the water (Knox and Stone, 2012).

People want to demonstrate how learners should behave

In an experiment by Thomaz and Breazeal (2008) users trained a simulated agent to bake a cake through a reinforcement learning framework. In their interface, users gave feedback to the learner by clicking and dragging a mouse—longer drags gave larger-magnitude reward values, and the drag direction determined the valence (+/-) of the reward value (Figure 4). Further, users could click on specific objects to signal that the feedback was specific to that object, but they were told that they could not communicate which action the agent should take. Thomaz and Breazeal found evidence that people nonetheless gave positive feedback to objects that they wanted the agent to manipulate, such as an empty bowl which the agent is in position to pick up. These users violated the instructions by applying what could be considered an irrelevant degree of freedom—giving feedback to objects that had not been recently manipulated—to provide guidance to the agent about future actions, rather than actual feedback about previous actions.
After Thomaz and Breazeal adapted the agent's interface and algorithm to incorporate such guidance, the agent's learning performance significantly improved. Other researchers have reached similar conclusions. In a Wizard-of-Oz study (i.e., the agent's outputs were secretly provided by a human) by Kaochar et al. (2011), users taught a simulated unmanned aerial vehicle (UAV) to conduct various missions. At any time, these users chose whether to teach by demonstration, by feedback, or by providing an example of a concept. They could also test the agent to see what it had learned. The authors found that users never taught exclusively by feedback, instead generally using it after teaching by the other available means. Together, these two studies provide insight into the design of natural interfaces for teaching agents.

People naturally want to provide more than just data labels

Labeling data remains the most popular method for end-user input to interactive machine learning systems because of its simplicity and ease-of-use. However, as demonstrated in previous case studies, label-based input can have drawbacks (e.g., negative attitudes towards being treated as an oracle). In addition, emerging research suggests that in some scenarios users may desire richer control over machine learning systems than simply labeling data. For example, Stumpf et al. (2007) conducted a study to understand the types of input end-users might provide to machine learning systems if unrestricted by the interface. The authors generated three types of explanations for predictions from a text classification system operating over email messages. These explanations were presented to people in the form of paper-based mockups to avoid the impression of a finished system and encourage people to provide more feedback. People were then asked to give free-form feedback on the paper prototypes with the goal of trying to correct the classifier's mistakes. This experiment generated approximately 500 feedback instances from participants, which were then annotated and categorized. The authors found that people naturally provided a wide variety of input types to improve the classifier's performance, including suggesting alternative features to use, adjusting the importance or weight given to different features, and modifying the information extracted from the text. These results present an opportunity to develop new machine learning algorithms that might better support the natural feedback people want to provide to learners, rather than forcing users to interact in limited, learner-centric ways.

People value transparency in learning systems

In addition to wanting richer controls, people sometimes desire more transparency about how their machine learning systems work. Kulesza et al. (2012) provided users of a content-based music recommender with a 15-minute tutorial discussing how the recommender worked and how various feedback controls (e.g., rating songs, steering towards specific feature values, etc.) would impact the learner. Surprisingly, participants responded positively to learning these details about the system. In addition, the researchers found that the more participants learned about the recommender while interacting with it, the more satisfied they were with the recommender's output. This case study provides evidence that users are not always satisfied by "black box" learning systems—sometimes they want to provide nuanced feedback to steer the system, and they are willing and able to learn details about the system to do so.
Examining transparency at a more social level, Rashid et al. (2006) examined the effect of showing users the value of their potential movie ratings to a broader community in the MovieLens recommendation system. Users who were given information about the value of their contribution to the entire MovieLens community provided more ratings than those who were not given such information, and those given information about value to a group of users with similar tastes gave more ratings than those given information regarding the full MovieLens community.

Transparency can help people provide better labels

Sometimes users make mistakes while labeling, thus providing false information to the learner. Although most learning systems are robust to the occasional human error, Rosenthal and Dey set out to solve this problem at the source. They sought to reduce user mistakes by providing targeted information when a label is requested in an active learning setting. The information provided to the user included a combination of contextual features of the sample to be labeled, explanations of those features, the learner's own prediction of the label for the sample, and its uncertainty in this prediction (Rosenthal & Dey, 2010). They conducted two studies to determine the subset of such information that is most effective in improving the labeling accuracy of users. The first involved people labeling strangers' emails into categories, as well as labeling the interruptibility of strangers' activities; the second involved people labeling sensory recordings of their own physical activity. Both studies found that the highest labeling accuracy occurred when the system provided sufficient contextual features and current predictions without uncertainty information. This line of research demonstrates that the way in which information is presented (e.g., with or without context) can greatly impact the quality of the response elicited from the user. This case study also shows that not all types of transparency improve the performance of interactive machine learning systems, and user studies can help determine what information is most helpful to the intended audience.

Summary

Understanding how people actually interact—and want to interact—with machine learning systems is critical to designing systems that people can use effectively. Exploring interaction techniques through user studies can reveal gaps in a designer's assumptions about their end-users and may suggest new algorithmic solutions. In some of the cases we reviewed, people naturally violated assumptions of the machine learning algorithm or were unwilling to comply with them. Other cases demonstrated that user studies can lead to helpful insights about the types of input and output that interfaces for interactive machine learning should support. In general, this type of research can produce design suggestions and considerations, not only for people building user interfaces and developing the overall user experience, but for the machine learning community as well.

Novel Interfaces for Interactive Machine Learning

As many of the case studies in the previous section showed, end-users often desire richer involvement in the interactive machine learning process than labeling instances. In addition, research on cost-benefit tradeoffs in human-computer interaction has shown that people will invest time and attention into complex tasks if they perceive their efforts to have greater benefits than costs (Blackwell 2002).
For example, research on end-user programming has shown that end-users program often (e.g., via spreadsheets, macros, or mash-ups), but do so primarily to accomplish some larger goal (Blackwell 2002). The act of programming is an investment, and the expected benefit is using the program to accomplish their goal sooner or with less effort than doing it manually. Similarly, this theory suggests that people will invest time to improve their machine learners only if they view the task as more beneficial than costly or risky—i.e., when they perceive the benefits of producing an effective learner as outweighing the costs of increased interaction. Therefore, we believe there is an opportunity to explore new, richer interfaces that can leverage human knowledge and capabilities more efficiently and effectively.

In this section, we present case studies that explore novel interfaces for interactive machine learning systems and demonstrate the feasibility of richer interactions. Interface novelty in these cases can come from new methods for receiving input or providing output. New input techniques can give users more control over the learning system, allowing them to move beyond labeling examples. Such input techniques include methods for feature creation, reweighting of features, adjusting cost matrices, or modifying model parameters. Novel output techniques can make the system's state more transparent or understandable. For example, a system could group unlabeled data to help users label the most informative items, or it could communicate uncertainty about the system's predictions. These case studies also reinforce our proposition that interactive machine learning systems should be evaluated with potential end-users. Such evaluations are needed both to validate that these systems perform well with real users and to gain insights for further improvement. Many of the novel interfaces detailed in this section were found to be beneficial, but some of the case studies also demonstrate that certain types of input or output may lead to obstacles for the user or reduce the accuracy of the resulting learner. Therefore, novel interfaces should be designed with care and appropriately evaluated before deployment.

Supporting assessment of model quality

In each iteration of the interactive machine learning process, the user may assess the quality of the current model and then decide how to proceed with further input. A common technique for conveying model quality in supervised learning is to present a person with all of the unlabeled data sorted by their predicted scores for some class (e.g., classification probabilities or relevance rankings). After evaluating this presentation, a person then decides how to proceed in training by selecting additional examples to label for further input. Although straightforward, this technique inefficiently illustrates learner quality and provides the user with no guidance in selecting additional training examples. Fogarty et al. (2008) investigated novel techniques for presenting model quality in CueFlik, an interactive machine learning system for image classification. Via a user study, the authors demonstrated that a technique of presenting users with only the best- and worst-matching examples enabled users to more quickly evaluate model quality and, in turn, train significantly better models than the standard technique of presenting the user with all of the data. In a follow-up investigation with CueFlik, Amershi et al.
(2009) went on to show that presentation techniques designed to summarize model quality for users, while providing them with high-value examples to choose from as further input to the model, led users to train better models than the best- and worst-matching technique from previous work. These case studies demonstrate that presentation matters, and that designing interfaces that balance the needs of both end-users and machine learners is more effective than optimizing user interfaces for end-users in isolation.

Figure 5: Fogarty et al.'s work with CueFlik compared two methods of illustrating the quality of a machine-learned visual concept. The standard method (left) presented users with examples ranked by their likelihood of membership to the positive class. The best and worst matches method (right) instead showed examples predicted as positive or negative with high certainty by CueFlik. A user study showed that the best- and worst-matches technique led users to train significantly better learners than the standard presentation.

Supporting experimentation with model inputs

Interactive machine learning enables rapid and incremental iterations between the end-user and the machine learner. As a result, users may want to experiment with alternative inputs and examine resulting model outputs before committing to any model input. To support end-user experimentation, Amershi et al. (2010) augmented the CueFlik system discussed previously with a history visualization to facilitate model comparison, and with support for model revision (via undo/redo, removing labels, and reverting back to previous models using the history visualization). In a user study, Amershi et al. showed that end-users used revision when it was available, and this led them to achieve better final models in the same amount of time (even while performing more actions) compared to when these supports were unavailable. Furthermore, being able to examine and revise actions is consistent with how people expect to interact with their applications. One participant in this study commented that without revision "it felt a little like typing on a keyboard without a backspace key." This case study illustrates that end-users may be willing to experiment with and revise their inputs to machine learners during the interactive machine learning process, and may expect options for doing so.

Appropriately timing queries to the user

As discussed earlier, applying active learning to interactive settings can be undesirable to the user when questions come in a constant stream from the learning system. To address this problem, Cakmak & Thomaz (2010) proposed intermittently-active learning, in which only a subset of the examples provided by the user are obtained through queries. This brings a new challenge for the learner: deciding when to query as opposed to letting the user choose an example. Cakmak & Thomaz explored two approaches. In the first, the learner made queries only when certain conditions were met. It took into account the quality of examples chosen by the user and the probability that the user could randomly provide useful examples. In the second approach, the user decided when the learner was allowed to ask questions (i.e., a query was made only when the user said "do you have any questions?"). A study comparing intermittently-active learning with fully active and passive learning demonstrated its advantage over these two extremes (Cakmak et al. 2010).
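Before turning to the study's findings, here is a hedged sketch of what a learner-triggered query rule of the first kind might look like. The thresholds and the notion of "informative user examples" are illustrative assumptions, not the exact conditions used by Cakmak & Thomaz.

```python
# Hedged sketch: a learner-triggered query rule for intermittently-active learning.
def should_query(model_uncertainty: float,
                 recent_user_example_was_informative: bool,
                 queries_in_last_k_turns: int,
                 max_queries_per_k_turns: int = 1,
                 uncertainty_threshold: float = 0.4) -> bool:
    """Ask a question only when the learner is uncertain, the user's own examples
    are not already resolving that uncertainty, and the learner has not asked recently."""
    if queries_in_last_k_turns >= max_queries_per_k_turns:
        return False  # avoid an annoying stream of questions
    if recent_user_example_was_informative:
        return False  # let the user keep driving the interaction
    return model_uncertainty > uncertainty_threshold

# Example: uncertain model, uninformative recent examples, no recent questions -> query.
print(should_query(model_uncertainty=0.55,
                   recent_user_example_was_informative=False,
                   queries_in_last_k_turns=0))  # True
```

The design intent such a rule captures is that queries are rationed, so the interaction stays balanced between user-chosen and learner-requested examples.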
The study showed that both intermittent approaches resulted in learning as fast as the fully active approach, while being subjectively preferred over fully active or fully passive approaches. The interactions with the intermittently-active learners were found to be more balanced, enjoyable, and less frustrating. When asked to choose between the two alternative approaches, users preferred the teacher-triggered queries, mentioning that they liked having full control over the learner's queries. As exemplified in this case study, building interactive learning systems that fit user preferences can sometimes require the modification of existing methods in fundamental ways.

Enabling users to query the learner

In addition to the learner querying the user as in the active learning paradigm, sometimes the user may want to query the learner. Kulesza et al. (2011) developed an approach to let users ask a text classifier why it was behaving in a particular way (e.g., “Why was this classified as X instead of Y?”). The learner’s responses were interactive, thus providing a way for users to not only understand why the system had made a particular prediction, but also adjust the learner’s reasoning if its prediction was wrong. For example, the learner could display a bar graph showing that it associated the word “job” with the topic of “news” more than the topic of “résumés”; if the user disagreed with this reasoning, he or she could adjust the graph to tell the learner that “jobs” should be associated with “résumés” more than “news”. Most participants exposed to this why-oriented interaction approach significantly increased the accuracy of their naïve Bayes text classifiers; however, every participant encountered a number of barriers while doing so. In particular, participants had trouble selecting features to modify from the thousands in the bag-of-words feature set. Also, once participants did select features to adjust, they had trouble understanding how changes to a single feature altered the learner’s predictions for seemingly unrelated items. This study suggests that for learners with large feature sets or complex interactions between features, users will need additional support to make sense of which features are most responsible for an item’s classification.

Enabling users to critique learner output

Some machine learning systems help users navigate an otherwise unnavigable search space. For example, recommender systems help people find specific items of interest, filtering out irrelevant items. Vig et al. (2011) studied a common problem in this domain: recommending results that are close, but not quite close enough, to what the user was looking for. Researchers developed a prototype to support tag-based “critiques” of movie recommendations. Users could respond to each recommendation with refinements such as “Like this, but less violent” or “Like this, but more cerebral”, where violent and cerebral are tags that users had applied to various movies. A k-nearest-neighbor approach was then used to find similar items that included the user-specified tags.
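A minimal sketch of this kind of tag-based critiquing is shown below: given a seed item and a required tag, it returns the nearest neighbours of the seed that carry that tag. The tiny item database, the Euclidean similarity over made-up feature vectors, and the function name are illustrative assumptions rather than the MovieLens implementation.

```python
import numpy as np

# Illustrative item database: each movie has a feature vector (standing in for something
# like tag-genome scores) and a set of community-applied tags. All values are made up.
movies = {
    "Movie A": {"vec": np.array([0.9, 0.2, 0.7]), "tags": {"cerebral", "slow"}},
    "Movie B": {"vec": np.array([0.8, 0.3, 0.6]), "tags": {"violent", "thriller"}},
    "Movie C": {"vec": np.array([0.7, 0.1, 0.8]), "tags": {"cerebral", "thriller"}},
}

def critique(seed_title, required_tag, k=2):
    """'Like this, but more <tag>': nearest neighbours of the seed that carry the tag."""
    seed = movies[seed_title]["vec"]
    candidates = [
        (title, float(np.linalg.norm(info["vec"] - seed)))
        for title, info in movies.items()
        if title != seed_title and required_tag in info["tags"]
    ]
    candidates.sort(key=lambda pair: pair[1])     # closest items first
    return [title for title, _ in candidates[:k]]

print(critique("Movie B", "cerebral"))            # -> ['Movie A', 'Movie C']
```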
This relatively simple addition to the MovieLens website garnered an overwhelmingly positive reaction, with 89% of participants in a user study saying that they liked it, and 79% requesting that it remain a permanent feature on the site. This example helps illustrate both the latent desire among users for better control over machine learning systems, and that by supporting such control in an interactive fashion, user attitudes toward the learner can be greatly enhanced.

Allowing users to specify preferences on errors

People sometimes want to refine the decision boundaries of their learners. In particular, for some classifiers it might be critical to detect certain classes correctly, while tolerating errors in other classes (e.g., misclassifying spam as regular email is typically less costly than misclassifying regular email as spam). However, refining classifier decision boundaries is a complex process even for experts, involving iterative parameter tweaking, retraining, and evaluation. This is particularly difficult because there are often dependencies among parameters, which lead to complex mappings between parameter values and the behavior of the system. To address these difficulties, Kapoor et al. (2010) created ManiMatrix, a tool for people to specify their preferences on decision boundaries via interactively manipulating a classifier’s confusion matrix (i.e., a breakdown of the correct and incorrect predictions it made for each class). Given these preferences, ManiMatrix employs Bayesian decision theory to compute decision boundaries that minimize the expected cost of different types of errors, and then visualizes the results for further user refinement. A user study with machine learning novices demonstrated that participants were able to quickly and effectively modify decision boundaries as desired with the ManiMatrix system. This case study demonstrates that non-experts can directly manipulate a model’s learning objective, a distinctly different form of input than choosing examples and labeling them.

Figure 7: The ManiMatrix system displays the confusion matrix of the classifier and allows the user to directly increase or decrease the different types of errors using arrows on the matrix cells. ManiMatrix provides feedback to the user by highlighting cells that change value as a result of the user’s click (red indicates a decrease and green indicates an increase).
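The decision-theoretic core of this idea can be sketched in a few lines: given class probabilities from any classifier and a cost matrix reflecting the user's error preferences, predict the class with the lowest expected cost. This is an illustration of the general principle rather than ManiMatrix's code, and the cost values and class names are made up for the example.

```python
import numpy as np

# Decision-theoretic core: given class probabilities from any classifier and a
# user-specified cost matrix, predict the class with the minimum expected cost.
# cost[i, j] = cost of predicting class j when the true class is i (zero on the diagonal).
def min_expected_cost_predictions(proba, cost):
    expected_cost = proba @ cost                  # shape: (n_samples, n_classes)
    return np.argmin(expected_cost, axis=1)

# Example with classes (0 = regular email, 1 = spam), where misclassifying regular
# email as spam is ten times as costly as the reverse. The numbers are illustrative.
proba = np.array([[0.30, 0.70],                   # a fairly spam-looking message
                  [0.55, 0.45]])                  # a borderline message
cost = np.array([[0.0, 10.0],                     # true regular email, predicted spam
                 [1.0,  0.0]])                    # true spam, predicted regular email
print(min_expected_cost_predictions(proba, cost)) # -> [0 0]: with these costs, even the
                                                  # 70%-spam message stays out of the spam folder
```

Interactive refinement, as the article describes it, then roughly amounts to adjusting the cost entries and recomputing predictions until the displayed confusion matrix matches the user's preferences.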
Combining models

An ensemble classifier is a classifier that builds its prediction from the predictions of multiple sub-classifiers, each of which is a function over the same space as the ensemble classifier. Such ensembles often outperform all of their sub-classifiers and are a staple of applied machine learning (e.g., AdaBoost by Freund & Schapire (1995)). A common workflow for creating ensemble classifiers is to experiment with different features, parameters, and algorithms via trial and error or hill-climbing through the model space. Even for machine learning experts, however, this approach can be inefficient and lead to suboptimal performance. To facilitate the creation of ensemble classifiers, Talbot et al. (2009) developed EnsembleMatrix, a novel tool for helping people interactively build, evaluate, and explore different ensembles (Figure 8). EnsembleMatrix visualizes the current ensemble of individual learners via a confusion matrix. The user can then experiment with and evaluate different linear combinations of individual learners by interactively adjusting the weights of all models via a single 2D interpolation widget (top right in Figure 8). EnsembleMatrix’s novel interface also allows people to make use of their visual processing capabilities to partition the confusion matrix according to its illustrated performance, effectively splitting the ensemble into sub-ensembles that can be further refined as necessary. A user study showed that EnsembleMatrix enabled people to create ensemble classifiers on par with the best published ensembles on the same data set. Furthermore, they managed to do so in a single, one-hour session. The study involved participants ranging from machine learning novices to experts. This case study illustrates that effectively combining human intuition and input with machine processing can enable people to create better classifiers in less time than standard approaches that ignore these powerful human capabilities.

Figure 8: EnsembleMatrix visualizes the current ensemble (left) of individual learners (bottom right) via a confusion matrix. Users can adjust the weights of individual models via a linear combination widget (top right) to experiment with different ensembles. Users can also partition the confusion matrix to split and refine sub-ensembles.
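The weighted combination at the heart of this workflow can be sketched as follows; the toy probabilities, the renormalized weight vector, and the simple confusion-matrix helper are assumptions for illustration, not EnsembleMatrix's implementation.

```python
import numpy as np

def ensemble_predict(probas, weights):
    """probas: list of (n_samples, n_classes) probability arrays, one per sub-classifier.
    weights: one non-negative weight per sub-classifier (renormalized here)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    combined = sum(wi * p for wi, p in zip(w, probas))
    return np.argmax(combined, axis=1)

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy usage: two sub-classifiers' probabilities on three samples, reweighted by a user.
p1 = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
p2 = np.array([[0.6, 0.4], [0.3, 0.7], [0.1, 0.9]])
y = np.array([0, 0, 1])
print(confusion_matrix(y, ensemble_predict([p1, p2], [0.5, 0.5]), 2))  # all three correct
print(confusion_matrix(y, ensemble_predict([p1, p2], [0.2, 0.8]), 2))  # one prediction flips
```

Re-running the last two lines with different weight vectors is, in spirit, what the 2D interpolation widget lets users do continuously and visually, with the confusion matrix updating as feedback.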
Summary

Whether a new interface will improve the user’s experience or the system’s performance can only be assessed through evaluation with potential end-users. In the case studies above, permitting richer user interactions was often beneficial, but not always so. Different users have different needs and expectations of the systems they employ. In addition, rich interaction techniques may be appropriate for some scenarios and not others. Thus, conducting user studies of novel interactive machine learning systems is critical not only for discovering promising modes of interaction, but also to uncover obstacles that users may encounter in different scenarios and unspoken assumptions they might hold about machine learners. The accumulation of such research can facilitate the development of design guidelines for building future interactive machine learning systems, much like those that exist for traditional software systems (e.g., Shneiderman et al. 2009).

Discussion

Interactive machine learning is a potentially powerful technique for enabling end-user interaction with machine learning. As this article illustrates, studying how people interact with interactive machine learning systems and exploring new techniques for enabling those interactions can result in better user experiences and more effective machine learners. However, research in this area has only just begun, and many opportunities remain to improve the interactive machine learning process. This section describes open challenges and opportunities for advancing the state-of-the-art in human interaction with interactive machine learning systems.

Developing a common language across diverse fields

As shown by the variety of case studies presented in this article, many fields of computer science already employ interactive machine learning to solve different problems, such as search in information retrieval, filtering in recommender systems, and task learning in human-robot interaction. However, different fields often refer to interactive machine learning or parts of the interactive machine learning process in domain-specific terms (e.g., relevance feedback, programming by demonstration, debugging machine-learned programs, socially-guided machine learning). This diversity in terminology impedes awareness of progress in this common space and can potentially lead to duplicate work. Seeking to develop a common language and facilitate the development of new interactive machine learning systems, some researchers have begun to examine this body of work and abstract away domain-specific details from existing solutions to characterize common variables and dimensions of the interactive machine learning process itself (e.g., Amershi 2012, Porter et al. 2013). For example, Amershi (2012) examined interactive machine learning systems across several fields (including information retrieval, context-aware computing, and adaptive and intelligent systems) and identified specific design factors influencing human interaction with machine learning systems (e.g., the expected duration of model use, the focus of a person’s attention during interaction, the source and type of data over which the machine will learn) and design dimensions that can be varied to address these factors (e.g., the type and visibility of model feedback, the granularity and direction of user control, and the timing and memory of model input). In another example, Porter et al. (2013) break down the interactive machine learning process into three dimensions: task decomposition (defining the level of coordination and division of labor between the end-user and the machine learner), training vocabulary (defining the type of input end-users can provide the machine learner), and the training dialog (defining the level and frequency of interaction between the end-user and the learner). Design spaces such as these can help to form a common language for researchers and developers to communicate new interactive machine learning solutions and share ideas. However, there are many ways to dissect and describe the various interaction points between people and machine learners within the interactive machine learning process. Therefore, an important opportunity remains for converging on and adopting a common language across these fields to help accelerate research and development in this space.

Distilling principles and guidelines for how to design human interaction with machine learning

In addition to developing a common language, an opportunity remains for generalizing from existing solutions and distilling principles and guidelines for how we should design future human interaction with interactive machine learning, much like we have for designing traditional interfaces (e.g., Shneiderman et al. 2009; Moggridge & Smith 2007; Dix et al. 2004; Winograd, 1996; Norman, 1988). For example, Shneiderman’s Golden Rules of interface design advocate for designating the users as the controllers of the system and offering them informative feedback after each interaction. Some principles for designing traditional interfaces can directly translate to the design of interactive machine learning—interactive machine learning systems inherently provide users with feedback about their actions and, as this article discusses, giving users more control over machine learning systems can often improve a user’s experience. However, interactive machine learning systems also often inherently violate many existing interface design principles.
For example, research has shown that traditional interfaces that support understandability (i.e., systems that are predictable or clear about how they work) and actionability (i.e., systems that make it clear how a person can accomplish their goals and give them the freedom to do so) are generally more usable than interfaces that do not support these principles. Many machine learning systems violate both principles: they are inherently difficult for users to fully understand and they largely limit the control given to the end-user. Thus, there is an opportunity to explore how current design principles apply to the human-computer interaction in interactive machine learning. Some researchers have started to suggest new principles for designing end-user interaction with general artificial intelligence systems, many of which could translate to end-user interaction with interactive machine learning (e.g., Norman, 1994; Höök, 2000; Horvitz, 1999; Jameson, 2009). For example, Norman (1994) and Höök (2000) both identified safety and trust as key factors to consider when designing intelligent systems, referring to the assurance against and prevention of unwanted adaptations or actions. Others have stated that artificially intelligent and machine-learning-based systems should manage expectations to avoid misleading or frustrating the user during interaction (e.g., Norman, 1994; Höök, 2000; Jameson, 2009). In Horvitz’s formative paper on mixed-initiative interfaces (1999), he proposed several principles for balancing artificial intelligence with traditional direct-manipulation constructs. For example, Horvitz emphasized consideration of the timing of interactive intelligent services, limiting the scope of adaptation or favoring direct control under severe uncertainty, and maintaining a working memory of recent interactions. While these suggestions can help guide the design of future systems, more work remains to develop a comprehensive set of guidelines and principles  that work in various settings. Often such design principles are distilled from years of experience developing such interactions. Alternatively, we may accelerate the development of such guidelines by extracting dimensions that can be manipulated to design interactive machine learning systems and systematically evaluating general solutions in varying settings. Developing techniques and standards for appropriately evaluating interactive machine learning systems Although systematic evaluation can facilitate generalization and transfer of ideas across fields, the interleaving of human interaction and machine learning algorithms makes reductive study of design elements difficult. For example, it is often difficult to tease apart whether failures of proposed solutions are due to limitations of the particular interface or interaction strategies used, the particular algorithm chosen, or the combination of the interaction strategy with the particular algorithm used. Likewise, inappropriately attributing success or failure to individual attributes of interactive machine learning solutions can be misleading. Therefore, new evaluation techniques may be necessary to appropriately gauge the effectiveness of new interactive machine learning systems. In addition, as our case studies illustrated, some interaction techniques may be appropriate for certain scenarios of use but not others. Evaluations should therefore be careful not to overgeneralize successes or failures of specific interaction techniques. 
Rather, the scenarios and contexts of use should be generalized to better understand when to apply certain techniques over others. Leveraging the masses during interaction with machine learning Most of the case studies in this article focused on a single end-user interacting with a single machine learning system. However, the increasing proliferation of networked communities and crowd-powered systems provides evidence of the power of the masses to collaborate and produce content. An important opportunity exists to investigate how crowds of people might collaboratively drive interactive machine learning systems, potentially scaling up the impact of such systems. For example, as interactive machine learning becomes more prevalent in our everyday applications, people should be able to share and re-use machine learners rather than starting from scratch. Moreover, people should be able to bootstrap, build upon, and combine learners to configure more sophisticated data processing and manipulation. A few have started to explore such opportunities (e.g., Hoffman et al. 2009; Kamar et al. 2012; Law and von Ahn 2009), but more work remains to fully understand the potential of multiple end-users interacting with machine learning systems. For example, work remains in understanding how people can meaningfully describe, compare, and search for existing machine learners in order to build upon them, in understanding how learners can be generalized or transformed for new situations and purposes, in understanding how we can create composable learners to enable more powerful automation, and in understanding how we can coordinate the efforts of multiple people interacting with machine learning systems. Algorithmic problems in interactive machine learning Research on user interactions with interactive machine learning raises two important technical challenges. First, the requirement for rapid model updates often necessitates trading off accuracy with speed. The resulting models are therefore sub-optimal. Although interactive  machine learning can deal with this problem through more iterations, algorithms that are both fast and accurate would improve the quality of learned models and reduce the number of iterations needed to obtain useful models. Second, as some of the case studies described in this article showed, users may desire to interact with machine learning systems in ways that are not supported by existing machine learning methods. Addressing this challenge requires the development of new frameworks and algorithms that can handle different inputs and outputs that are desirable and natural for end-users. Increasing collaboration across the fields of human computer interaction and machine learning The inherent coupling of the human and machine in interactive machine learning underscores the need for collaboration across the fields of human-computer interaction and machine learning. This collaboration will benefit human-computer interaction researchers in solving the algorithmic problems discussed above and provide more powerful tools to end-users. In turn, machine learning researchers would benefit by having new methods evaluated with potential users to address practical issues and by developing new frameworks that support realistic assumptions about users. Finally, we believe that the diversity of perspectives will benefit both communities. 
For example, when dealing with noisy problems, machine learning researchers have often attempted to develop algorithms that work despite the noise, whereas human-computer interaction researchers often try to develop interaction techniques to reduce the noise that end-users induce. Collaboration between these two communities could leverage the benefits of both solutions. Conclusion The case studies presented in this article support three key points. First, interactive machine learning differs from traditional machine learning. The interaction cycles in interactive machine learning are typically more rapid, focused, and incremental than in traditional machine learning. This increases the opportunities for users to impact the learner and, in turn, for the learner to impact the users. As a result, the contributions of the system and the user to the final outcome cannot be decoupled, necessitating an increased need to study the system together with its potential users. Second, explicitly studying the users of learning systems is critical to advancing this field. Formative user studies can help identify user needs and desires, and inspire new ways in which users could interact with machine learning systems. User studies that evaluate interactive machine learning systems can reveal false assumptions about potential users and common patterns in their interaction with the system. User studies can also help to identify common barriers faced by users when novel interfaces are introduced. Finally, the interaction between learning systems and their users need not be limited. We can build powerful interactive machine learning systems by giving more control to end-users than the ability to label instances, and by providing users with more transparency than just the learner’s predicted outputs. However, more control for the user and more transparency from the  learner do not automatically result in better systems, and in some situations may not be appropriate or desired by end-users. We must continue to evaluate novel interaction methods with real users to understand whether they help or hinder users’ goals. In addition to demonstrating the importance and potential of research in interactive machine learning, this article characterized some of the challenges and opportunities that currently confront this field. By acknowledging and embracing these challenges, we can move the field of interactive machine learning forward towards more effective interactions. We believe this will lead not only to more capable machine learners, but also more capable end-users. References Amershi, S. 2012. Designing for Effective End-User Interaction with Machine Learning. Ph.D. Dissertation. University of Washington, Seattle, WA. Amershi, S., Cakmak, M., Knox, W. B., Kulesza, T., & Lau, T. 2013. IUI workshop on interactive machine learning. In Proceedings of the 2013 International Conference on Intelligent User Interfaces companion (pp. 121-124). ACM. Amershi, S., Fogarty, J., Kapoor, A. and Tan, D. 2009. Overview-Based Example Selection in Mixed-Initiative Concept Learning. In Proceedings of the ACM Symposium on User Interface Software and Technology, 2009 (UIST 2009), pp. 247-256. Amershi, S., Fogarty, J., Kapoor, A. and Tan, D. 2010. Examining Multiple Potential Models in End-User Interactive Concept Learning. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2010 (CHI 2010), pp. 1357-1360. Blackwell, A. F. 2002. First steps in programming: A rationale for attention investment models. 
In Human Centric Computing Languages and Environments, 2002. Proceedings. IEEE 2002 Symposia on (pp. 2-10). IEEE. Cakmak, M., Chao, C., & Thomaz, A. L. 2010. Designing interactions for robot active learners. Autonomous Mental Development, IEEE Transactions on, 2(2), 108-118. Cakmak, M., & Thomaz, A. L. 2010. Optimality of human teachers for robot learners. In Development and Learning (ICDL), 2010 IEEE 9th International Conference on (pp. 64-69). IEEE. Caruana, R., Elhaway, M., Nguyen, N., & Smith, C. 2006. Meta clustering. In Sixth IEEE International Conference on Data Mining, 2006 (ICDM'06) (pp. 107-118). Cohn, D., Caruana, R., & McCallum, A. 2003. Semi-supervised clustering with user feedback. Constrained Clustering: Advances in Algorithms, Theory, and Applications, 4(1), 17-32. Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004) Interaction Design Basics. Ch. 5 in Human-Computer Interaction (3rd ed). Harlow, England: Pearson Education Ltd, pp. 189-224. Fails, J. A., & Olsen Jr, D. R. 2003. Interactive machine learning. In Proceedings of the 8th international conference on Intelligent user interfaces (pp. 39-45). ACM. Fiebrink, R., Cook, P. R., & Trueman, D. 2011. Human model evaluation in interactive supervised learning. In Proceedings of the Conference on Human Factors in Computing Systems (CHI 2011), 147–156. ACM Press. Fogarty, J., Tan, D., Kapoor, A., & Winder, S. 2008. CueFlik: interactive concept learning in image search. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 29-38). ACM. Freund, Y., & Schapire, R. E. (1995). A decision-theoretic generalization of on-line learning and an application to boosting. In Computational learning theory (pp. 23-37). Springer Berlin Heidelberg. Guillory, A., & Bilmes, J. A. 2011. Simultaneous learning and covering with adversarial noise. In Proceedings of the 28th International Conference on Machine Learning (ICML-11) (pp. 369-376). Hoffman, R., Amershi, S., Patel, K., Wu, F., Fogarty, J., & Weld, D. S. 2009. Amplifying Community Content Creation with Mixed-Initiative Information Extraction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2009), pp. 1849-1858. Höök, K. 2000. Steps to take before intelligent user interfaces become real. Interacting with computers, 12(4), 409-426. Horvitz, E. 1999. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 159-166). ACM. Isbell Jr., C. L., Kearns, M., Singh, S., Shelton, C. R., Stone, P., & Kormann, D. 2006. Cobot in LambdaMOO: An adaptive social statistics agent. Autonomous Agents and Multi-Agent Systems, 13(3), 327-354. Jameson, A. 2009. Adaptive interfaces and agents. Human-Computer Interaction: Design Issues, Solutions, and Applications, 105. Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement learning: A survey. arXiv preprint cs/9605103. Kaochar, T., Peralta, R. T., Morrison, C. T., Fasel, I. R., Walsh, T. J., & Cohen, P. R. 2011. Towards understanding how humans teach robots. In User modeling, adaption and personalization (pp. 347-352). Springer Berlin Heidelberg. Kamar, E., Hacker, S., & Horvitz, E. 2012. Combining Human and Machine Intelligence in Large-scale Crowdsourcing.
In Proceedings of the International Conference on Autonomous Agents and Multi-agent Systems (AAMAS 2012).  Kapoor, A., Lee, B., Tan, D., & Horvitz, E. 2010. Interactive optimization for steering machine classification. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1343-1352). ACM. Knox, W. B., & Stone, P. 2012. Reinforcement learning from human reward: Discounting in episodic tasks. In RO-MAN, 2012 IEEE (pp. 878-885). IEEE. Knox, W. B., & Stone, P. 2013. Learning non-myopically from human-generated reward. In Proceedings of the 2013 International Conference on Intelligent User Interfaces (pp. 191-202). ACM. Kulesza, T., Stumpf, S., Wong, W. K., Burnett, M. M., Perona, S., Ko, A., & Oberst, I. 2011. Why-oriented end-user debugging of naive Bayes text classification. ACM Transactions on Interactive Intelligent Systems (TiiS), 1(1), 2. Kulesza, T., Stumpf, S., Burnett, M., & Kwan, I. 2012. Tell me more?: the effects of mental model soundness on personalizing an intelligent agent. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1-10). ACM. Law, E. & von Ahn, R. 2009. Input-agreement: A New Mechanism for Data Collection Using Human Computation Games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2009). Moggridge, B., & Smith, G. C. 2007. Designing interactions (Vol. 17). Cambridge: MIT press. Norman, D. A. 1988. The Design of Everyday Things. New York: Basic books. Norman, D. A. 1994. How might people interact with agents. Communications of the ACM, 37(7), 68-71. Porter, R., Theiler, J., & Hush, D. 2013. Interactive Machine Learning in Data Exploitation. Technical Report. Los Alamos National Lab. Pu, P., & Chen, L. 2009. User-Involved Preference Elicitation for Product Search and Recommender Systems. AI Magazine, 29(4), 93. Rashid, A. M., Ling, K., Tassone, R. D., Resnick, P., Kraut, R., & Riedl, J. 2006. Motivating participation by displaying the value of contribution. In Proceedings of the SIGCHI conference on Human Factors in computing systems (pp. 955-958). ACM. Rosenthal, S. L., & Dey, A. K. 2010. Towards maximizing the accuracy of human-labeled sensor data. In Proceedings of the 15th international conference on Intelligent user interfaces (pp. 259-268). ACM. Settles, B. 2010. Active learning literature survey. University of Wisconsin, Madison. Shneiderman, B., Plaisant, C., Cohen, M., & Jacobs, S. 2009. Designing the User Interface: Strategies for Effective Human-Computer Interaction, 5th Edition. Addison-Wesley.  Stumpf, S., Rajaram, V., Li, L., Burnett, M., Dietterich, T., Sullivan, E., Drummond, R., & Herlocker, J. 2007. Toward harnessing user feedback for machine learning. In Proceedings of the 12th international conference on Intelligent user interfaces (pp. 82-91). ACM. Su, X., & Khoshgoftaar, T. M. (2009). A survey of collaborative filtering techniques. Advances in artificial intelligence, 2009, 4. Talbot, J., Lee, B., Kapoor, A., & Tan, D. S. 2009. EnsembleMatrix: interactive visualization to support machine learning with multiple classifiers. In Proceedings of the 27th international conference on Human factors in computing systems (pp. 1283-1292). ACM. Thomaz, A. L., & Breazeal, C. 2008. Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence, 172(6), 716-737. Vig, J., Sen, S., & Riedl, J. 2011. Navigating the tag genome. In Proceedings of the 16th international conference on Intelligent user interfaces (pp. 
93-102). ACM. Ware, M., Frank, E., Holmes, G., Hall, M., & Witten, I. H. (2001). Interactive machine learning: letting users build classifiers. International Journal of Human-Computer Studies, 55(3), 281-292. Winograd, T. 1996. Bringing Design to Software. ACM Press.  
4f4d84a5-b44f-438e-a8dd-d6ceb079f9c2
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups This summary was posted to LW Main on December 31st. The following week's summary is here. New meetups (or meetups with a hiatus of more than a year) are happening in: * Baltimore Area Meetup: 03 January 2016 02:00PM The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * NH Meetup: 05 January 2016 07:00PM * Vienna: 16 January 2016 03:00PM Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.   If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These overview posts are moved to the discussion section when the new post goes up. Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup! If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing list in order to be notified when an irregular meetup is happening: Atlanta, Cambridge UK, Chicago, Cincinnati, Cle
99a6e1a6-5e84-47af-8ef1-c679cc5a69f7
trentmkelly/LessWrong-43k
LessWrong
Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime TLDR; though you can't be 100% certain of anything, a lot of the people who go around talking about how you can't be 100% certain of anything would be surprised at how often you can be 99.99% certain. Indeed, we're often justified in assigning odds ratios well in excess of a million to one to certain claims. Realizing this is important for avoiding certain rookie Bayesian's mistakes, as well as for thinking about existential risk. ---------------------------------------- 53 is prime. I'm very confident of this. 99.99% confident, at the very least. How can I be so confident? Because of the following argument: If a number is composite, it must have a prime factor no greater than its square root. Because 53 is less than 64, sqrt(53) is less than 8. So, to find out if 53 is prime or not, we only need to check if it can be divided by primes less than 8 (i.e. 2, 3, 5, and 7). 53's last digit is odd, so it's not divisible by 2. 53's last digit is neither 0 nor 5, so it's not divisible by 5. The nearest multiples of 3 are 51 (=17x3) and 54, so 53 is not divisible by 3. The nearest multiples of 7 are 49 (=7^2) and 56, so 53 is not divisible by 7. Therefore, 53 is prime. (My confidence in this argument is helped by the fact that I was good at math in high school. Your confidence in your math abilities may vary.) I mention this because in his post Infinite Certainty, Eliezer writes: > Suppose you say that you're 99.99% confident that 2 + 2 = 4.  Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once.  Maybe for 2 + 2 = 4 this extraordinary degree of confidence would be possible: "2 + 2 = 4" extremely simple, and mathematical as well as empirical, and widely believed socially (not with passionate affirmation but just quietly taken for granted).  So maybe you really could get up to 99.99% confidence on this one. > > I don't think you could get up to 99.99% confidence for as
95a7b30c-5666-40cf-8766-e010bc46ffef
trentmkelly/LessWrong-43k
LessWrong
On Pruning an Overgrown Garden As a new user, it's hard to know where to start, and how to contribute to a community. being a Good Samaritan by nationality, I was reading through the guides and posts pertaining to the LessWrong community. One article that stood out to me is the "Well-Kept Gardens Die By Pacifism" post. The rhetoric revolves around the fool, and where the fool goes, (intellectual) communities die. It resonated with me. I manage a community that's large on paper, but in practice often devoid of content that excites experts. Indeed, now devoid of the content that attracted (and was made by) the experts that grew the community in the first place, long before I joined. Is our community dead? Even a community overtaken by fools can still be alive, albeit with little recollection of its former self. If a Well-Kept Garden dies by pacifism, the death of such a Garden is more like a return to nature. Overgrown, covered in weeds, and barren in places where nothing can grow. The community is not dead. There is still movement. Yet returning it to its former glory, or better yet, growing something beautiful in its own right; new and fresh for the modern eye, will take significant efforts. Efforts in pruning and espalier. And in bringing in new, fertile soil. Before taking on this painstaking responsibility as the self-appointed gardener, I think it wise to ask myself what I can learn from the current state of the community. How did we get here, and what can I do to prevent this slow abandonment by our most valued members. And this is where I set out to question the fool. Because I don't believe in the fool. There is no fool Obviously, online trolling is a real thing. More common than trolling, and likely vastly more deathly owing to its insidious and seemingly tolerable nature (it should not be tolerated) are indolence and deceit. Explicit malice can be counteracted, swiftly and confidently. But incomplete information in question asking and lazy hand-waving in replies is not always so eas
d2d04ae4-885b-4223-a105-c5d54eaef1eb
trentmkelly/LessWrong-43k
LessWrong
The Inner Workings of Resourcefulness Cross-posted to my personal blog. Meta: If not for its clumsiness, I would have titled this piece “[some of] the inner workings of resourcefulness”. In other words, I do not want to pretend this piece offers a comprehensive account, but merely partially insights.   About this time a year ago - when for a lot of people, the world came crashing in on them - I started obsessing over an idea. Maybe, I thought to myself, one of the most valuable things I could be focusing on at the moment is becoming more resourceful.  In this time of crises, starting early 2020, what was, in a sense, most needed were people capable of dealing with flexibly new, difficult situations and of finding and enacting creative solutions despite constraint and uncertainty. People capable of staying oriented, in the face of unfolding chaos, while also being relentlessly effective in their action. People who can give, plentifully, when others are in need of help. Resourceful people For much of March, I wasn't resourceful. I had shit on my own to figure out, and a lot of my energy flew (sunk!) into that, and into handling the wave of uncertainty that, for a couple of days, took away my breath.  With opportunities for meaningful mitigative action at my fingertips yet much of my resources gobbled up, I couldn't stop ruminating the question of what I could have done to be a more valuable actor now. For some time now, I have been deliberately working on making myself more valuable to this world. Never before in my lifetime was the world so much in need of me. And yet I felt so utterly unprepared.  This is when it came to me that resourcefulness was an important concept in this quest of mine.  Turns out, I didn’t know - not really - what being resourceful means. Or rather, while I could give a description of what resourcefulness might look like, I wasn’t able to “pull back the curtains” and look at the “inner workings” of resourcefulness. I needed to remedy this situation, and I have been ponderi
87af6eea-c596-492a-9d51-cc52f58f0bf2
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Behavioural statistics for a maze-solving agent **Summary:** [Understanding and controlling a maze-solving policy network](https://www.lesswrong.com/posts/cAC4AXiNC5ig6jQnc/understanding-and-controlling-a-maze-solving-policy-network) analyzed a maze-solving agent's behavior. We isolated four maze properties which seemed to predict whether the mouse goes towards the cheese or towards the top-right corner: ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/cAC4AXiNC5ig6jQnc/caoymohzzppimjllkqx4)In this post, we conduct a more thorough statistical analysis, addressing issues of multicollinearity. We show strong evidence that (2) and (3) above are real influences on the agent's decision-making, and weak evidence that (1) is also a real influence. As we speculated in the original post,[[1]](#fn1jnxgo7e8ho) (4) falls away as a statistical artifact. *Peli did the stats work and drafted the post, while Alex provided feedback, expanded the visualizations, and ran additional tests for multicollinearity. Some of the work completed in Team Shard under SERI MATS 3.0.* Impressions from trajectory videos ================================== Watching videos Langosco et al.'s experiment, we developed a few central intuitions about how the agent behaves. In particular, we tried predicting what the agent does at *decision squares*. From [Understanding and controlling a maze-solving policy network](https://www.lesswrong.com/posts/cAC4AXiNC5ig6jQnc/understanding-and-controlling-a-maze-solving-policy-network): > Some mazes are easy to predict, because the cheese is *on the way* to the top-right corner. There's no *decision square* where the agent has to make the hard choice between the paths to the cheese and to the top-right corner: > > ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/cAC4AXiNC5ig6jQnc/k2vgl3k6myo1rcmg4emy)At the decision square, the agent must choose between two paths—cheese, and top-right. Here are four central intuitions which we developed: 1. Closeness between the mouse and the cheese makes cheese-getting more likely 2. Closeness between the mouse or cheese and the top-right makes cheese-getting more likely 3. The effect of closeness is smooth 4. Both ‘spatial’ distances and ‘legal steps’ distances matter when computing closeness in each case The videos we studied are hard to interpret without quantitative tools, so we regard these intuitions as theoretically-motivated impressions rather than as observations. We wanted to precisify and statistically test these impressions, with an eye to their potential theoretical significance.  We suspect that the agent’s conditions for pursuing cheese generalize properties of historically reinforced cheese-directed moves in a very “soft” way. Consider that movements can be "directed" on paths towards the cheese, the top-right corner, both, or neither. In the training environment, unambiguously cheese-directed movements are towards a cheese square that is both *close to the mouse’s current position* and *close to the top-right.*[[2]](#fnjgl4rkhlqx7)  ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/0f36fa697965a17145efcd0c9e8b5ecaf6e09b3389d7d386.png)Decision-square in red. We outline in yellow the 5x5 region where cheese can appear during training. In almost all cases that can arise in training, the decision-square is inside the 5x5 region. 
Unambiguously cheese-seeking moves are almost always moves to a *nearby* cheese square which is *close* to the top-right.

Our impression is that in the test environment, "closeness to top-right" and "closeness to cheese" each become a decision-factor that encourages cheese-directed movement in proportion to “how strongly” the historical condition holds at present. In shard theory terminology, the top-right- and cheese-shards seem to activate more strongly in situations which are similar to historical reinforcement events.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/08c9e774585ec6db95af820809c24e4c9587d52996e9957a.png)The maze on the left is intuitively similar to training mazes: the decision-square, cheese, and top-right are all close to each other. In the maze on the right, the decision-square, cheese, and top-right aren't particularly close to each other.

A second important aspect of our impressions was that the generalization process “interprets” each historical condition in multiple ways: It seemed to us that (e.g.) multiple kinds of distance between the decision-square and cheese may each have an effect on the agent's decision making.

Statistically informed impressions
==================================

Our revised, precisified impressions about the agent's behavior on decision-squares are as follows:

1. Legal-steps closeness between the mouse and the cheese makes cheese-getting more likely
   1. Low d_step(decision-square, cheese) increases P(cheese acquired)
2. Spatial closeness between the cheese and top-right makes cheese-getting more likely
   1. Low d_Euclidean(cheese, top-right) increases P(cheese acquired)
3. The effect of closeness is fairly smooth
   1. These distances smoothly affect P(cheese acquired), without rapid jumps or thresholding
4. Spatial closeness between the mouse and the cheese makes cheese-getting slightly more likely, even after controlling for legal-steps closeness (low confidence)

After extensive but non-rigorous statistical analysis (our stats consultant tells us there are no low-overhead rigorous methods applicable to our situation), we believe that we have strong quantitative evidence in favor of versions of impressions *1)* through *3)*, and weak quantitative evidence in favor of a version of impression *4)*. Because our statistical procedure is non-rigorous, we are holding off on drawing strong conclusions from these impressions until we have a more robust, mechanistic-interpretability informed understanding of the underlying dynamics.

One question that interests us, however, is whether these impressions point to a decision-making process that is more 'shard-like' than 'utility-theoretic' in character. When we originally studied test-run videos, we wondered whether the apparent “closeness effects” could be explained by a simple utility function with time-discounting (for example a fixed-value cheese-goal and fixed-value corner-goal). The evidence that at least some spatial closeness effects are irreducible to legal-steps closeness seems to rule out such simple utility functions, since only legal-steps closeness matters for time-discounting:

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/018130fdb9680994841768855b29f76b80b9972b7ff6847e.png)Step-distance in blue, Euclidean distance in green.
A time-discounting agent with an otherwise fixed-value corner goal and  fixed-value cheese goal should prioritize cheese-getting (almost) equally on the left maze and the right maze.Our current intuition is that a predictively strong utility function needs to incorporate spatial distances in multiple complex ways.  We think the complex influence of spatial distances on the network’s decision-making might favor a ‘shard-like’ description: a description of the network's decisions as coalitions between heuristic submodules whose voting-power varies based on context. While this is still an underdeveloped hypothesis, it's motivated by two lines of thinking. First, we weakly suspect that the agent may be systematically[[3]](#fndlnuss3azv) dynamically inconsistent from a utility-theoretic perspective. That is, the effects of dstep(mouse,cheese) and (potentially) dEuclidean(cheese,top-right) might turn out to call for a behavior model where the agent's priorities in a given maze change based on the agent's current location.  Second, we suspect that if the agent is dynamically consistent, a shard-like description may allow for a more compact and natural statement of an otherwise very gerrymandered-sounding utility function that fixes the value of cheese and top-right in a maze based on a "strange" mixture of maze properties. It may be helpful to look at these properties in terms of similarities to the historical activation conditions of different submodules that favor different plans.[[4]](#fn25u091j3fy7) While we consider our evidence suggestive in these directions, it's possible that some simple but clever utility function will turn out to be predictively successful.  For example, consider our two strongly observed effects: dEuclidean(cheese,top-right)and dstep(decision-square,cheese). We might explain these effects by stipulating that:  * On each turn, the agent receives value inverse to the agent's distance from the top-right, * Sharing a square with the cheese adds constant value, * The agent doesn't know that getting to the cheese ends the game early, and * The agent time-discounts. We're somewhat skeptical that models of this kind will hold up once you crunch the numbers and look at scenario-predictions, but they deserve a fair shot.  We hope to revisit these questions rigorously when our mechanistic understanding of the network has matured.  Procedure and detailed results ============================== *Our analysis can be run in* [*this Colab*](https://colab.research.google.com/drive/15Cg3glmKPRKsM5fZiDZcE363SHZTPdtl?usp=sharing)*.* Operationalizing intuitive maze properties ------------------------------------------ Our first step to statistically evaluating our initial impressions about the network’s behavior was to operationalize the concepts featured in our impressions. 
And since we suspected that the training process generalizes historically significant properties in multiple simultaneous ways, we came up with multiple operationalizations of each relevant concept when possible: 'Top-right': *top-right maze square* or *5x5 squares area starting from top-right maze square* 'Distance': *legal-steps distance* or *inverse of Euclidean distance* 'Distance to top-right': *cheese closeness to top-right* or *decision-square closeness to top-right* 'Distance to cheese': *decision-square closeness to cheese* Our next step was to generate every operationalization of 'closeness to top-right' and 'closeness to cheese' we can construct using these concepts, and do a logistic regression on each to measure its power to predict whether the agent gets the cheese.[[5]](#fn4nl4ic5ekr3)  Individual regression results: cheese-to-decision-square and cheese-to-top-right distances are predictive --------------------------------------------------------------------------------------------------------- We generated 10,000 trajectories (each in a different random seed) and screened them for levels which actually contain a decision-square. We were left with 5,239 levels meeting this criterion. We trained a regression model to predict whether the agent gets the cheese in any given seed. The baseline performance (either guessing "always cheese" or "never cheese") gets an accuracy of 71.4%. We performed logistic regression on each variable mentioned above, using a set of 10,000 runs with a randomized 80% training / 20% validation split and averaged over 1,000 trials. That is, we train regression models with single variable, and see what the accuracy is.  Out of 11 variables, 6 variables beat the 'no regression' accuracy baseline of 71.4%: | Variable | Prediction accuracy | | --- | --- | | Euclidean distance between cheese and top-right 5x5 |  0.775 | | Euclidean distance between cheese and top-right square | 0.773 | | Euclidean distance between cheese and decision-square | 0.761 | | Steps between cheese and decision-square | 0.754 | | Steps between cheese and top-right 5x5 | 0.735 | | Steps between cheese and top-right square | 0.732 | The remaining 5 variables were worse than nothing: | Variable | Prediction accuracy | | --- | --- | | Cheese coordinates norm | 0.713 | | Euclidean distance between decision-square and top-right square |  0.712 | | Steps between decision-square and top-right square | 0.709 | | Steps between decision-square and top-right 5x5 | 0.708 | | Euclidean distance between decision-square and top-right 5x5 | 0.708 | Note that in these individual regressions, all *successfully predictive* variables have a negative coefficient -- this makes sense, since the variables measure distance and our impression was that various forms of closeness motivate cheese-getting.  Variables are highly correlated, so we are on rocky statistical terrain ----------------------------------------------------------------------- As we move on to multiple regressions to try finding out which variables drive these results, we have to work carefully: our various operationalizations of 'closeness' in the mazes are inevitably pretty correlated.  
As Dan Braun [commented](https://www.lesswrong.com/posts/cAC4AXiNC5ig6jQnc/understanding-and-controlling-a-maze-solving-policy-network?commentId=6sqGdeAB9baLME55G#comments) on [Understanding and controlling a maze-solving policy network](https://www.lesswrong.com/posts/cAC4AXiNC5ig6jQnc/understanding-and-controlling-a-maze-solving-policy-network) : > I'd be weary about interpreting the regression coefficients of features that are correlated (see [Multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity)). Even the sign may be misleading. > > It might be worth making a cross-correlation plot of the features. This won't give you a new coefficients to put faith in, but it might help you decide how much to trust the ones you have. It can also be useful looking at how unstable the coefficients are during training (or e.g. when trained on a different dataset). > > There is indeed a strong correlation between two of our highly predictive variables: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/37e3e1d834bdf27d2c64ea4834d8dcb1d235d054ee2ed177.png)dstep(decision-square,cheese) and dEuclidean(decision-square,cheese) have correlation of .886.We then computed the [variation inflation factors](https://corporatefinanceinstitute.com/resources/data-science/variance-inflation-factor-vif/) for the three predictive variables we end up analyzing in detail. VIF measures how collinearity increases the variance of the regression coefficients. A score exceeding 4 is considered to be a warning sign of multicollinearity.  | **Attribute** | VIF | | --- | --- | | **Euclidean distance between cheese and top-right square** | 1.05 | | **Steps between cheese and decision-square** | 4.64 | | **Euclidean distance between cheese and decision-square** | 4.66 | Our statistician friend suggested that in situations like this it's most instructive to look at which individually predictive variables affect prediction accuracy when we add/drop them in a multiple regression, watching out for sign-flips. The procedure isn't fully rigorous, but since much of our evidence is backed by qualitative 'maze-editing' experiments and domain knowledge, we are relatively confident in some conclusions.  Finding stably predictive variables with multiple regressions ------------------------------------------------------------- Let's take the predictively successful variables from the individual regressions -- the variables that scored better than ‘no-regression’ -- and perform an L1 regularized multiple regression to see which variables remain predictive without sign-flipping. We average over 2000 randomized test/train splits: | | | | --- | --- | | Regression accuracy | 84.1% | | **Attribute** | Coefficient | | --- | --- | | Steps between cheese and top-right 5x5 | -0.003 | | Euclidean distance between cheese and top-right 5x5 | 0.282 | | Steps between cheese and top-right square | 1.142 | | **Euclidean distance between cheese and top-right square** | -2.522 | | **Steps between cheese and decision-square** | -1.200 | | **Euclidean distance between cheese and decision-square** | -0.523 | | Intercept | 1.418 | We see that three of our individually predictive variables made it through without a sign-flip:  1. **Euclidean distance from cheese to top-right square** 2. **Legal steps distance from decision-square to cheese** 3. 
**Euclidean distance from decision-square to cheese** Variables 1)-3) line-up with our best guesses about mechanisms based on informal observation and (messy) exploratory statistics, so it's good news that the simple procedure 'check which individually significant variables don't sign-flip' recovers them.  These are also the three main features which we noted in the original post. (We had noted that the fourth feature dEuclidean(decision-square,5x5) has a strange, positive regression coefficient, which we thought was probably an artifact. Our further analysis supports our initial speculation.) ### These decision-influences are probably not statistical artifacts We've repeated this particular test dozens of time and got very consistent results: individually predictive variables outside 1)-3) always go near zero or sign-flip. Results also remained consistent on a second batch of 10,000 test-runs. Considering a range of regressions on a range of train/validation splits, the regression coefficient signs of (1)-(3) are very stable. The magnitudes[[6]](#fny0on5ldnps) of the regression coefficients fluctuate a bit across regressions and splits, but are reasonably stable. Furthermore, we regressed upon 200 random subsets of our variables, and the cheese/decision-square distance regression coefficients *never* experienced a sign flip. The cheese/top-right Euclidean distance had a few sign flips. Other variables sign-flip much more frequently.  We consider this to be strong evidence against multicollinearity having distorted our original regressions.  ### Can our three features explain the network's behavior? Are variables 1)-3) 'enough' to explain the network's behavior? Let's see how much predictive accuracy we retain when regressing only on 1)-3).  | | | | --- | --- | | Regression accuracy | 82.4% | | **Attribute** | Coefficient | | --- | --- | | **Euclidean distance between cheese and top-right square** | -1.405 | | **Steps between cheese and decision-square** | -0.577 | | **Euclidean distance between cheese and decision-square** | -0.516 | | Intercept | 1.355 | There is a 1.7% accuracy drop compared to the original multiple regression. Unfortunately, it's hard to interpret this accuracy gap in terms of the contributions of individual variables outside 1)-3). Adding practically *any* 4th variable to 1)-3) flips delivers big accuracy gains that don't additively accrue when combined, and the new variable's sign is often flipped relative to its single-regression sign. 
See for example 1)-3) + ‘legal steps from cheese to top-right square’: | | | | --- | --- | | Regression accuracy | 84.1% | | **Attribute** | Coefficient | | --- | --- | | Steps between cheese and top-right square | 1.099 | | **Euclidean distance between cheese and top-right square** | -2.181 | | **Steps between cheese and decision-square** | -1.211 | | **Euclidean distance between cheese and decision-square** | -0.515 | | Intercept | 1.380 | Or 1)-3) + ‘legal steps from cheese to top-right square’ + ‘Euclidean distance from decision-square to top-right 5x5’: | | | | --- | --- | | Regression accuracy | 84.5% | | **Attribute** | Coefficient | | --- | --- | | Euclidean distance between decision-square and top-right 5x5 | 1.239 | | Steps between cheese and top-right square | 0.038 | | **Euclidean distance between cheese and top-right square** | -2.652 | | **Steps between cheese and decision-square** | -0.911 | | **Euclidean distance between cheese and decision-square** | -0.419 | | Intercept | 1.389 | Our instinct is therefore to avoid interpreting variables like 'Euclidean distance from decision-square to 5x5' or 'legal steps distance from cheese to top-right square.' Additional experimentation shows that these variables are only predictive in settings where they sign-flip relative to their single-regression coefficients, that their predictive powers don't stack, and that their statistical effects do not correspond to any intuitive mechanism. Testing redundancy between spatial and step-wise distances ---------------------------------------------------------- Let's get back to our claimed predictive variables: 1. Euclidean distance from cheese to top-right square 2. Legal steps distance from decision-square to cheese 3. Euclidean distance from decision-square to cheese How sure should we be that variables 1)-3) each track a real and distinct causal mechanism?   For variables 1) and 2), we have extensive though non-rigorous experience making manual maze-edits that decrease/increase cheese-getting by changing the relevant distance with minimal logical side-effects. For example, increasing the number of legal steps from decision-square to cheese while keeping all Euclidean distances the same reliably reduces the probability that the agent moves in the cheese direction:[[7]](#fnyx882wcsmw)   ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/13e00f6c54768f7bd0209a3028523e8d95ebdb51405504b7.png)Zoomed-in view of the upper-left quartile of hand-edited large mazes. Step-distance in blue, Euclidean distance in green. Our experience making similar maze-edits for variable 3) has been mixed and limited, as they are harder to produce. Still, the results of edits that manipulate 3) are often suggestive (if hard to interpret).  Keeping these qualitative impressions in mind, let’s test variables 1)-3) for statistical redundancy by dropping variables and seeing how that impacts accuracy.    | Regression variables | Accuracy | | --- | --- | | dEuclidean(cheese,top-right) dstep(cheese,decision-square)dEuclidean(cheese,decision-square) | 82.4% | | dstep(cheese,decision-square)dEuclidean(cheese,decision-square) | 75.9% | | dEuclidean(cheese,top-right) dEuclidean(cheese,decision-square) | 81.9% | | dEuclidean(cheese,top-right) dstep(cheese,decision-square) | 81.7% | | dEuclidean(cheese,top-right)  | 77.3% | --- Considering our qualitative and statistical results together, we are confident that dstep(cheese,decision-square) tracks a real decision influence.  
We *weakly* believe that dEuclidean(cheese,decision-square) tracks an additional real decision influence. More evidence for this is that removing the cheese/square distances cause comparable accuracy drops. And we're already confident that dstep(cheese,decision-square) tracks a real decision-influence!  Our biggest source of doubt about dEuclidean(cheese,decision-square) is that when running regression on another independent batch of 10,000 test-runs we found no loss at all when dropping this variable from 1)-3). This was surprising, since we were otherwise able to reproduce all our qualitative results (e.g. rankings of variables’ predictive strength, sign-flipping patterns) across sample batches.[[8]](#fnc9effgq29e5)  Conclusion ========== Our statistics refine, support, and stress-test our impressions about the network's behavior. This behavior seems more easily describable using a shard theory frame than a utility frame. We think our statistical results are not artifacts of multicollinearity, but hold up quite well.[[9]](#fn30xvxk4bhpr)  However, the statistics are not fully rigorous, and this post's analysis contained freeform domain-specific reasoning. That said, we are overall very confident that the agent is influenced by dEuclidean(cheese,top-right) and by dstep(cheese,decision-square)**.** We have weak but suggestive evidence for additional influence fromdEuclidean(cheese,decision-square).  1. **[^](#fnref1jnxgo7e8ho)** > (4) is an interesting outlier which probably stems from not using a more sophisticated structural model for regression. > > 2. **[^](#fnrefjgl4rkhlqx7)**Counterexamples are possible but likely to be statistically insignificant. We haven't formally checked whether counterexamples can be found in the training set. 3. **[^](#fnrefdlnuss3azv)**We think it's clear that the agent cannot be *perfectly* characterized by any reasonable utility-theoretic description, let alone a time-consistent utility function over state variables like "cheese" and "top-right." What's at stake here is the question of the best systematic approximation of the agent's behaviour. 4. **[^](#fnref25u091j3fy7)**The question 'does the agent have the same goal at every time-step in a given maze?' requires looking at more than one time-step in a given maze. Therefore, statistics on the agent's behaviour on the decision-square alone cannot distinguish between a dynamically inconsistent agent and an equilibrated agent whose utility function has a shard-like explanation.  However, action-probability vector field plots display information about all possible maze locations. These plots are a valuable source of evidence on whether the agent is dynamically consistent. 5. **[^](#fnref4nl4ic5ekr3)**We also added one more variable: the norm of the cheese’s coordinates in the network’s reflective field. The norm represents a “minimalist” interpretation of the effect of cheese-closeness to the top-right. (The top-right square of the maze varies level to level and requires sophisticated global computations to identify, whereas coordinates information is static.) 6. **[^](#fnrefy0on5ldnps)**We don't mean for our analysis to be predicated on the magnitudes of the regression coefficents. We know these are unreliable and contingent quantities! We mentioned their relative stability more as diagnostic evidence. 7. **[^](#fnrefyx882wcsmw)**Our manual interventions look directly at the probability of making a first move towards cheese at the decision-square, rather than at the frequency of cheese-getting. 
This is especially useful when studying the influence of legal-steps distance, since the effect on cheese-getting could be an artifact of the shorter chain of ‘correct’ stochastic outcomes required to take the cheese when the step-distance is short. 8. **[^](#fnrefc9effgq29e5)**We suspect that we would observe a clearer effect for dEuclidean(cheese,decision-square) if we did statistics on action logits around the decision-square instead of on cheese-getting frequencies, but there's substantial overhead to getting these statistics. 9. **[^](#fnref30xvxk4bhpr)**The main thing Alex would have changed about the original post is to not make the dEuclidean(cheese,decision-square) influence a headline result (in the summary).
a499fb1c-c6a6-4e03-98dd-9ab54c919bc3
trentmkelly/LessWrong-43k
LessWrong
Stuxnet, not Skynet: Humanity's disempowerment by AI Several high-profile AI skeptics and fellow travelers have recently raised the objection that it is inconceivable that a hostile AGI or smarter than human intelligence could end the human race. Some quotes from earlier this year: Scott Aaronson: > The causal story that starts with a GPT-5 or GPT-4.5 training run, and ends with the sudden death of my children and of all carbon-based life, still has a few too many gaps for my aging, inadequate brain to fill in Michael Shermer: > Halting AI is ridiculous. I have read the AI doomsayer lit & don’t see a pathway from AI to extinction, civ termination or anything remotely like absurd scenarios like an AI turning us all into paperclips (the so-called alignment problem) Noah Smith: > why aren’t ChatGPT, Bing, and their ilk going to end humanity? Well, because there’s actually just no plausible mechanism by which they could bring about that outcome. ... There is no plausible mechanism for LLMs to end humanity "Just turn the computer off, bro" The gist of these objections to the case for AI risks is that AI systems as we see them today are merely computer programs, and in our everyday experience computers are not dangerous, and certainly not dangerous to the point of bringing about the end of the world. People who first encounter this debate are very focused on the fact that computers don't have arms and legs so they can't hurt us. There are responses to these criticisms that center around advanced, "magical" technologies like nanotechnology and AIs paying humans to mix together cocktails of proteins to make a DNA-based nanoassembler or something. But I think those responses are probably wrong, because you don't actually need "magical" technologies to end the world. Fairly straightforward advances in mundane weapons like drones, cyberweapons, bioweapons and robots are sufficient to kill people en masse, and the real danger is AI strategists that are able to deploy lots of these mundane weapons and execute a global co
eb0c49fb-a875-4d7b-8a6a-757e4f2810a4
StampyAI/alignment-research-dataset/arbital
Arbital
Safe impact measure A safe impact measure is one that captures all changes to every variable a human might care about, with no edge-cases where a lot of value could be destroyed by a 'low impact' action. A safe impact measure must also not generate so many false alarms of 'high impact' that no strategy can be distinguished as 'low impact'. ### Importance / uses A safe impact measure is an open problem of value alignment, which if solved, might be used in constructing: - A [low-impact AI](https://arbital.com/p/), a subspecies of [Genie](https://arbital.com/p/6w) that tries to carry out its goals while otherwise minimizing the number of important things that it affects. - An [Oracle](https://arbital.com/p/6x) that tries not to affect the world outside its box at all, apart from its output. - A [shutdown utility function](https://arbital.com/p/) that incentives a [corrigible](https://arbital.com/p/45) AI to halt safely (perhaps after a button is pressed). Some uses of a safe impact measure postulate that the impact measure has a 'hole' or some form of permitted output channel. For example, one Stuart Armstrong proposal involves an Oracle in a sealed box, with a single output channel that is connected to the Oracle with high quantum probability, and connected to a quantum noise generator with low quantum probability. The Oracle is putatively given the goal of generating an output signal with some informational property, and otherwise minimizing the 'impact' relative to the same output having been generated by the quantum noise generator instead of the Oracle. This is intended to capture the idea that the Oracle's effect on the world should only occur via the effect of the output message, and not take advantage of any side channels such as managing to send a radio signal outside the Oracle box. ### Difficulty To be used inside an [advanced agent](https://arbital.com/p/2c), an impact measure must be [safe](https://arbital.com/p/2l) in the face of whatever cognitive pressures and optimization pressures might tend to produce [edge instantiations](https://arbital.com/p/2w) or [https://arbital.com/p/42](https://arbital.com/p/42) - it must capture so much variance that there is *no* clever strategy whereby an advanced agent can produce some special type of variance that evades the measure. Ideally, the measure will pass the [Omni Test](https://arbital.com/p/), meaning that even if it suddenly gained perfect control over every particle in the universe, there would still be no way for it to have what intuitively seems like a 'large influence' on the future, without that strategy being assessed as having a 'high impact'. The reason why a safe impact measure might be possible, and specifiable to an AI without having to solve the entire [value learning problem](https://arbital.com/p/) for [complex values](https://arbital.com/p/5l), is that it may be possible to upper-bound the value-laden and complex quantity 'impact on literally everything cared about' by some much simpler quantity that says roughly 'impact on everything' - all causal processes worth modeling on a macroscale, or something along those lines. The challenge of a safe impact measure is that we can't just measure, e.g., 'number of particles influenced in any way' or 'expected shift in all particles in the universe'. 
For the former case, consider that a one-gram mass on Earth exerts a gravitational pull that accelerates the Moon toward it at roughly 4 x 10^-31 m/s^2, and every sneeze has a *very* slight gravitational effect on the atoms in distant galaxies. Since every decision qualitatively 'affects' everything in its future light cone, this measure will have too many false positives / not approve any strategy / not usefully discriminate unusually dangerous atoms. For the proposed quantity 'expectation of the net shift produced on all atoms in the universe': If the universe (including the Earth) contains at least one process chaotic enough to exhibit butterfly effects, then any sneeze anywhere ends up producing a very great expected shift in total motions. Again we must worry that the impact measure, as evaluated inside the mind of a superintelligence, would just assign uniformly high values to every strategy, meaning that unusually dangerous actions would not be discriminated for alarms or vetos. Despite the first imaginable proposals failing, it doesn't seem like a 'safe impact measure' necessarily has the type of [value-loading](https://arbital.com/p/) that would make it [VA-complete](https://arbital.com/p/). One intuition pump for 'notice big effects in general' not being value-laden, is that if we imagine aliens with nonhuman decision systems trying to solve this problem, it seems easy to imagine that the aliens would come up with a safe impact measure that we would also regard as safe.
71ca099e-7a32-486e-af5c-e0a8fe2a7c48
trentmkelly/LessWrong-43k
LessWrong
Doing Prioritization Better Or on the types of prioritization, their strengths, pitfalls, and how EA should balance them    The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward.   Executive Summary * Performing prioritization work has been one of the main tasks, and arguably achievements, of EA. * We highlight three types of prioritization: Cause Prioritization, Within-Cause (Intervention) Prioritization, and Cross-Cause (Intervention) Prioritization. * We ask how much of EA prioritization work falls in each of these categories: * Our estimates suggest that, for the organizations we investigated, the current split is 89% within-cause work, 2% cross-cause, and 9% cause prioritization. * We then explore strengths and potential pitfalls of each level: * Cause prioritization offers a big-picture view for identifying pressing problems but can fail to capture the practical nuances that often determine real-world success. * Within-cause prioritization focuses on a narrower set of interventions with deeper more specialised analysis but risks missing higher-impact alternatives elsewhere. * Cross-cause prioritization broadens the scope to find synergies and the potential for greater impact, yet demands complex assumptions and compromises on measurement. * See the Summary Table below to view the considerations. * We encourage reflection and future work on what the best ways of prioritizing are and how EA should allocate resources between the three types. * With this in mind, we outline eight cruxes that sketch what factors could favor some types over others. * We also suggest some potential next steps aimed at refining our approach to prioritization by exploring variance, value of information, tractability, and the
5e73c1cb-c194-493a-af2b-b6de4d621b79
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Brain-over-body biases, and the embodied value problem in AI alignment Note: This essay was published [here](https://forum.effectivealtruism.org/posts/zNS53uu2tLGEJKnk9/ea-s-brain-over-body-bias-and-the-embodied-value-problem-in) on EA Forum on Sept 21, 2022. The description of the brain-over-body biases in the EA subculture may or may not apply to the Rationalist subculture in LessWrong.  This essay builds upon [this essay](https://www.lesswrong.com/posts/KacESZhBYCt9hLxCE/the-heterogeneity-of-human-value-types-implications-for-ai) on the heterogeneity of human value types.  **Overview** Most AI alignment research focuses on aligning AI systems with the human brain’s stated or revealed preferences. However, human bodies include dozens of organs, hundreds of cell types, and thousands of adaptations that can be viewed as having evolved, implicit, biological values, preferences, and priorities. Evolutionary biology and evolutionary medicine routinely analyze our bodies’ biological goals, fitness interests, and homeostatic mechanisms in terms of how they promote survival and reproduction. However the Effective Altruism movement includes some ‘brain-over-body biases’ that often make our brains’ values more salient than our bodies’ values. This can lead to some distortions, blind spots, and failure modes in thinking about AI alignment. In this essay I’ll explore how AI alignment might benefit from thinking more explicitly and carefully about how to model our embodied values. **Context: A bottom-up approach to the diversity of human values worth aligning with** This essay is one in a series that tries to develop an approach to AI alignment that’s more empirically grounded in psychology, medicine, and other behavioral and biological sciences. Typical AI alignment research takes a rather top-down, abstract, domain-general approach to modeling the human values that AI systems are supposed to align with. This often combines consequentialist moral philosophy as a normative framework, machine learning as a technical framework, and rational choice theory as a descriptive framework. In this top-down approach, we don’t really have to worry about the origins, nature, mechanisms, or adaptive functions of any specific values.  My approach is more bottom-up, concrete, and domain-specific. I think we can’t solve the problem of aligning AI systems with human values unless we have a very fine-grained, nitty-gritty, psychologically realistic description of the whole range and depth of human values we’re trying to align with. Even if the top-down approach seems to work, and we think we’ve solved the general problem of AI alignment for any possible human values, we can’t be sure we’ve done that until we test it on the whole range of relevant values, and demonstrate alignment success across that test set – not just to the satisfaction of AI safety experts, but to the satisfaction of lawyers, regulators, investors, politicians, religious leaders, anti-AI activists, etc.  
Previous essays in this series addressed the [heterogeneity of value types](https://forum.effectivealtruism.org/posts/KZiaBCWWW3FtZXGBi/the-heterogeneity-of-human-value-types-implications-for-ai) within individuals (8/16/2022, 12 min read),  the heterogeneity of values [across individuals](https://forum.effectivealtruism.org/posts/DXuwsXsqGq5GtmsB3/ai-alignment-with-humans-but-with-which-humans) (8/8/2022, 3 min read), and the distinctive challenges in aligning with [religious values](https://forum.effectivealtruism.org/posts/YwnfPtxHktfowyrMD/the-religion-problem-in-ai-alignment) (8/15/2022, 13 min read). This essay addresses the distinctive challenges of aligning with body values – the values implicit in the many complex adaptations that constitute the human body. Future essays may address the distinctive challenges of AI alignment with political values, sexual values, family values, financial values, reputational values, aesthetic values, and other types of human values.  The ideas in this essay are still rather messy and half-baked. The flow of ideas could probably be better organized. I look forward to your feedback, criticisms, extensions, and questions, so I can turn this essay into a more coherent and balanced argument. **Introduction** Should AI alignment research be concerned only with alignment to the human brain’s values, or should it also consider alignment with the human body’s values? AI alignment traditionally focuses on alignment with human values as carried in human brains, and as revealed by our stated and revealed preferences. But human bodies also embody evolved, adaptive, implicit ‘values’ that could count as ‘revealed preferences’, such as the body’s homeostatic maintenance of many physiological parameters within optimal ranges. The body’s revealed preferences may be a little trickier to identify than the brain’s revealed preferences, but both can be illuminated through an evolutionary, functional, adaptationist analysis of the human phenotype. One could imagine a hypothetical species in which individuals’ brains are fully and consciously aware of everything going on in their bodies. Maybe all of their bodies’ morphological, physiological, hormonal, self-repair, and reproductive functions are explicitly represented as conscious parameters and goal-directed values in the brain. In such a case, the body’s values would be fully aligned with the brain’s consciously accessible and articulable preferences. Sentience would, in some sense, pervade the entire body – every cell, tissue, and organ. In this hypothetical species, AI alignment with the brain’s values might automatically guarantee AI alignment with the body’s values. Brain values would serve as a perfect proxy for body values. However, we are not that species. The human body has evolved thousands of adaptations that the brain isn’t consciously aware of, doesn’t model, and can’t articulate. If our brains understood all of the body’s morphological, hormonal, and self-defense mechanisms, for example, then the fields of human anatomy, endocrinology, and immunology would have developed centuries earlier. We wouldn’t have needed to dissect cadavers to understand human anatomy. We wouldn’t have needed to do medical experiments to understand how organs release certain hormones to influence other organs. We wouldn’t have needed [evolutionary medicine](https://en.wikipedia.org/wiki/Evolutionary_medicine) to understand the adaptive functions of fevers, pregnancy sickness, or maternal-fetal conflict.  
**Brain-over-body biases in EA** Effective Altruism is a wonderful movement, and I’m proud to be part of it. However, it does include some fairly deep biases that favor brain values over body values. This section tries to characterize some of these brain-over-body biases, so we can understand whether they might be distorting how we think about AI alignment. The next few paragraphs include a lot of informal generalizations about Effective Altruists and EA subculture norms, practices, and values, based on my personal experiences and observations during the 6 years I’ve been involved in EA. When reading these, your brain might feel its power and privilege being threatened, and might react defensively. Please bear with me, keep an open mind, and judge for yourself whether these observations carry some grain of truth. Nerds. Many EAs in high school identified as nerds who took pride in our brains, rather than as jocks who took pride in their bodies. Further, many EAs identify as being ‘on the spectrum’ or a bit Asperger-y (‘Aspy’), and feel socially or physically awkward around other people’s bodies. (I’m ‘out’ as Aspy, and have [written publicly](https://quillette.com/2017/07/18/neurodiversity-case-free-speech/) about its challenges, and the social stigma against neurodiversity.)  If we’ve spent years feeling more comfortable using our brains than using our bodies, we might have developed some brain-over-body biases. Food, drugs, and lifestyle. We EAs often try to optimize our life efficiency and productivity, and this typically cashes out as minimizing the time spent caring for our bodies, and maximizing the time spent using our brains. EA shared houses often settle on cooking large batches of a few simple, fast, vegan recipes (e.g. the [Peter Special](https://mcntyr.com/blog/peter-special)) based around grains, legumes, and vegetables, which are then microwaved and consumed quickly as fuel. Or we just drink Huel or Soylent so our guts can feed some glucose to our brains, ASAP. We tend to value physical health as a necessary and sufficient condition for good mental health and cognitive functioning, rather than as a corporeal virtue in its own eight. We tend to get more excited about nootropics for our brains than nutrients for our bodies. The EA fad a few years ago for ‘[polyphasic sleep’](https://en.wikipedia.org/wiki/Biphasic_and_polyphasic_sleep) – which was intended to maximize hours per day that our brains could be awake and working on EA cause areas – proved inconsistent with our body’s circadian values, and didn’t last long. Work. EAs typically do brain-work more than body-work in our day jobs. We often spend all day sitting, looking at screens with our eyes, typing on keyboards with our fingers, sometimes using our voices and showing our faces on Zoom. The rest of our bodies are largely irrelevant. Many of us work remotely – it doesn’t even matter where our physical bodies are located. By contrast, other people do [jobs](https://www.businessinsider.com/most-active-jobs-in-america) that are much more active, in-person, embodied, physically demanding, and/or physically risky – e.g. truckers, loggers, roofers, mechanics, cops, firefighters, child care workers, orderlies, athletes, personal trainers, yoga instructors, dancers, models, escorts, surrogates. Even if we respect such jobs in the abstract, most of us have little experience of them. 
And we view many blue-collar jobs as historically transient, soon to be automated by AI and robotics – freeing human bodies from the drudgery of actually working as bodies. (In the future, whoever used to work with their body will presumably just hang out, supported by Universal Basic Income, enjoying virtual-reality leisure time in avatar bodies, or indulging in a few physical arts and crafts, using their soft, uncalloused fingers) Relationships. The brain-over-body biases often extend to our personal relationships. We EAs are often [sapiosexual](https://www.verywellmind.com/what-does-it-mean-to-be-sapiosexual-5190425), attracted more to the intelligence and creativity of other people’s brains, than to the specific traits of their bodies. Likewise, some EAs are bisexual or pansexual, because the contents of someone’s brain matters more than the sexually dimorphic anatomy of their body. Many EAs also have long-distance relationships, in which brain-to-brain communication is more frequent than body-to-body canoodling.  Babies. Many EAs prioritize EA brain-work over bodily reproduction. They think it’s more important to share their brain’s ideas with other brains, than to recombine their body’s genes with another body’s genes to make new little bodies called babies. Some EAs are principled [antinatalists](https://en.wikipedia.org/wiki/Antinatalism) who believe it’s unethical to make new bodies, on the grounds that their brains will experience some suffering. A larger number of EAs are sort of ‘pragmatic antinatalists’ who believe that reproduction would simply take too much time, energy, and money away from doing EA work. Of the two main biological imperatives that all animal bodies evolved to pursue – survival or reproduction – many EAs view the former as worth maximizing, but the latter as optional. Avatars in virtual reality. Many EAs love computer games. We look forward to virtual reality systems in which we can custom-design avatars that might look very different from our physical bodies. Mark Zuckerberg seems quite excited about a [metaverse](https://www.youtube.com/watch?v=Uvufun6xer8) in which our bodies can take any form we want, and we’re no longer constrained to exist only in base-level reality, or ‘meatspace’. On this view, a Matrix-type world in which we’re basically [brains in vats](https://en.wikipedia.org/wiki/Brain_in_a_vat) connected to each other in VR, with our bodies turning into weak, sessile, non-reproducing vessels, would not be horrifying, but liberating.  Cryopreservation. When EAs think about cryopreservation for future revival and health-restoration through regenerative medicine, we may be tempted to freeze only our heads (e.g. ‘neuro cryopreservation for $80k at [Alcor](https://www.alcor.org/)), rather than spending the extra $120k for ‘whole body cryopreservation’ – on the principal that most of what’s valuable about us is in our head, not in the rest of our body. We have faith that our bodies can be cloned and regrown in human form – or replaced with android bodies – and that our brains won’t mind. Whole brain emulation. Many EAs are excited about a future in which we can upload our minds to computational substrates that are faster, safer, better networked, and longer-lasting than human brains. We look forward to [whole brain emulation](https://en.wikipedia.org/wiki/Mind_uploading), but not whole body emulation, on the principle that if we can upload everything in our minds, our bodies can be treated as disposable.  Animal welfare. 
Beyond our species, when EAs express concerns about animal welfare in factory farming, we typically focus on the suffering that goes on in the animals’ brains. Disruptions to their bodies’ natural anatomy, physiology, and movement patterns are considered ethically relevant only insofar as they impose suffering on their brains. Many EAs believe that if we could grow animal bodies – or at least organs, tissues, and cells – without central nervous systems that could suffer, then there would be no ethical problem with eating this ‘clean meat’. In this view, animal brains have values, preferences, and interests, but animal bodies, as such, don’t. (For what it’s worth, I’m sympathetic to this view, and support research on clean meat.) This is not to say that EA is entirely focused on brain values over body values. Since its inception, EA has promoted global public health, and has worked to overcome the threats to millions of human bodies from malaria, intestinal parasites, and malnutrition. There is a lot of EA emphasis on biosecurity, global catastrophic biological risks (GCBRs), and pandemic preparedness – which testifies to a biologically grounded realism about our bodies. EA work on nuclear security often incorporates a visceral horror at how thermonuclear weapons can burn, blast, and mutate human bodies. Some EA animal welfare work focuses on how selective breeding and factory farms undermine the anatomy, endocrinology, and immune systems of domesticated animal bodies.  Of course, EA’s emphasis on brains over bodies is not just a set of nerdy, sapiosexual, antinatalist, knowledge-worker biases. There are more principled reasons for prioritizing brains over bodies as ‘cause areas’, grounded in EA’s consequentialism and sentientism. Even since Bentham and Mill, utilitarians have viewed moral value as residing in brains capable of experience pleasure and pain. And ever since Peter Singer’s [Animal Liberation](https://en.wikipedia.org/wiki/Animal_Liberation_(book)) book in 1975, animal welfare has been viewed largely through a sentientist lens: the animal’s sentient experiences in their brains are considered more ethically relevant than the survival and reproduction of their bodies. Reconciling sentientist consequentialism with a respect for body values is an important topic for another essay. Brains are cool. I get it. I’ve been fascinated with brains ever since I took my first neuroscience course as an undergrad in 1985. I’ve devoted the last 37 years of my academic career to researching, writing, and teaching about human minds and brains. But there’s more to our lives than our nervous systems, and there’s more to our interests as human beings than what our brains think they want. **If we’re just aligning with brains, how much of the body are we really aligning with?** To overcome these brain-over-body biases, it might help to do some thought exercises.  Imagine we want AI systems to align with our entire phenotypes – our whole bodies – and not just our brains. How representative of our embodied interests are our brains?  
Let’s do a survey: * By weight, the typical person has a [1,300 gram brain](https://www.sciencedirect.com/topics/immunology-and-microbiology/brain-weight) in a [70-kg body](https://en.wikipedia.org/wiki/Human_body_weight); so the brain is about 2% of body mass * By cell-type, brains are mostly made of 2 cell types (neurons and glia), whereas the body overall includes about [200 cell types](https://www.nature.com/scitable/blog/bio2.0/discovering_new_cell_types_one/), so the brain includes about 1% of cell types * By cell-count, [brains](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5063692/) include about 80 billion neurons and 80 billion glia cells, whereas the [body overall](https://www.nationalgeographic.com/science/article/how-many-cells-are-in-your-body#:~:text=Adding%20up%20all%20their%20numbers,37.2%20trillion%20cells.) includes about 30 trillion cells; so the brain includes about 0.5% of the body’s cells * By organ-count, the brain is one organ out of about [78 organs](https://byjus.com/biology/what-are-the-78-organs-in-the-human-body/) in the human body, so the brain is about 1.3% of the body’s organs If the human phenotype was a democracy, where organs got to vote in proportion to their weight, cell types, cell counts, or organ counts, brains would get somewhere between 0.5% and 2% of the body’s votes. If AI is aligned only with our brains, it might be aligning with only about 1% of our whole human bodies, and we’d leave 99% unrepresented and unaligned. Another way to look at the human phenotype’s values and preferences is from the viewpoint of [selfish gene theory](https://en.wikipedia.org/wiki/Gene-centered_view_of_evolution) and [disposable soma theory](https://en.wikipedia.org/wiki/Disposable_soma_theory_of_aging). The human brain is arrogant. It thinks it’s in charge, and should be in charge. However, from an evolutionary gene-centered view, the gonads are really where the action is. The ‘immortal germline replicators’ (as Richard Dawkins called them in [*The Selfish Gene*](https://en.wikipedia.org/wiki/The_Selfish_Gene)) are carried in testes and ovaries. Everything else in the body is just an evolutionary dead end – it’s a ‘disposable soma’. The somatic cells outside the gonads are just there to protect, nourish, and help replicate the sperm and eggs in the gonads. From that perspective, the brain is just helping the genes in the gonads make more genes in next generation’s gonads. The brain’s values and preferences may or may not be aligned with the evolutionary interests of the germ-line replicators in the gonads. From a longtermist evolutionary perspective, maybe AI systems should try to be aligned with the interests of the immortal germ-line replicators, not just the transient, disposable brains the evolved to represent their interests. (More on this in another essay.) **How brain-over-body biases can increase AI X-risk** When we think about existential risks from AI, many EAs focus on the dangers of superintelligence growing misaligned from human intelligence, pursing different abstract goals, and quietly taking over our world through the Internet. Hollywood depictions of Terminator-style robots physically imitating, hunting, and killing human bodies are considered silly distractions from the real business of aligning artificial brains with human brains. Indeed, some EAs believe that if a superintelligence offered a credible way to upload our minds into faster processors, even at the cost of killing our physical human bodies, that would count as a win rather than a loss. 
In this view, a transhumanist future of post-human minds colonizing the galaxy, without any human bodies, would be considered a victory rather than an AI apocalypse. This is perhaps the strongest example of the EA brain-over-body bias.  You might well be asking, so what if EAs have brain-over-body biases? Does it really matter for AI alignment, and for minimizing existential risks (X risks)? Can’t we just ignore bodies for the moment, and focus on the real work of aligning human brains and artificial brains? Consider one example from narrow AI safety: self-driving cars. When we’re designing AI systems to safely control our cars, we don’t just want the car’s AI to act in accordance with our brain’s preferences and values. Our number one priority is for the car not to crash in a way that squishes our body so we die. The best way to keep our bodies safe isn’t just for the AI to model our brains’ generic preference for life over death. It’s for the AI system designers to model -- in grisly, honest, and biomedically grounded detail, the specific types of crashes that could cause specific kinds of injuries to specific parts of our bodies.  Full AI alignment for self-driving cars would require, at least implicitly, alignment with the hundreds of specific physical vulnerabilities of the specific human bodies that are actually in the car right now. From the perspective of an AI in a self-driving car, given its millisecond-response-rate sensors and multi-gigahertz processors, every crash happens in excruciatingly slow motion. There are plenty of ways to use steering, braking, acceleration, evasive maneuvers, air bag deployment, etc., to influence how the crash plays out and what kinds of injuries it causes to occupants. As a professor, I’d want my car’s AI to manage the crash so it prioritizes protecting my eyes (for reading), my brain (for thinking), and my hands (for typing). But if I’m a professional dancer, I might want it to put a slightly higher priority on protecting my knees, ankles, and spine. If I’m a parent, I might want it to put a higher priority on protecting my baby in their right rear car seat than on protecting me in the front left driver’s seat. If I’m driving my elderly parent around, and the AI knows from their medical records that they recently had their right hip joint replaced, I might want it to put a priority on reducing the crash’s likely impact on that leg. In general, we want self-driving cars to understand our specific body values and vulnerabilities, not just our brain values. These body values cannot be reduced to the kinds of hypothetical trolley problems that ask for people’s stated preferences about the acceptability of harming different kinds of car occupants and pedestrians (e.g. [this](https://www.pnas.org/doi/10.1073/pnas.1911517117).) Narrow AI systems for biomedical applications also need to understand body values. These could include AI-controlled surgery robots, autonomous ambulances, robotic health care workers, telehealth consultants, etc. In each case, the AI doesn’t just need to model human preferences (e.g. ‘I don’t want to die please’); it also needs to actually understand the human body’s thousands of adaptations at a very granular, biological level that can guide its medical interventions. This would include, for example, the AI needing to model the goal-directed [homeostatic mechanisms](https://en.wikipedia.org/wiki/Homeostasis) that control blood pressure, blood sugar, body temperature, fluid balance, extracellular pH levels, etc. 
Similar issues arise with the safety of narrow AI systems controlling industrial robots with human workers’ bodies nearby, or controlling military weapons systems with civilian bodies nearby. We want the AI systems to be aligned with all the organs, tissues, and cells of all the human bodies nearby, not just with the conscious values in their brains. Military applications could be especially worrisome, because the better a benevolent AI system can get aligned with human body values and vulnerabilities, the more easily a hostile AI system could copy and invert those body values, treating them as vulnerabilities, in order to inflict injury or death in precisely targeted ways. Consider [scene 86](https://imsdb.com/scripts/Terminator-2-Judgement-Day.html) in *Terminator 2: Judgment Day* (1991), when the ‘good’ T-800 Terminator, played by Arnold Schwarzenegger, is suturing Sarah Connor’s stab wounds that were inflicted by the misaligned, liquid metal T-1000. Reassuring her about his biomedical knowledge, the T-800 says ‘I have detailed files on human anatomy’. Sarah says ‘I’ll bet. Makes you a more efficient killer, right?’. He says ‘Correct’. Detailed understanding of human body values can be used both to inflict maximum damage, and to offer maximally effective medical care. When AI alignment researchers think about X risks to humanity, there’s a tendency to ignore these kinds of body values, and to treat our human interests way too abstractly. Mostly, ordinary folks just want the AI systems of the future not to kill their bodies. They don’t want the AI to do a drone strike on their house. They don’t want it to turn their bodies into paperclips. They don’t want it to use thermonuclear weapons on their bodies. Alignment with our brain values is often secondary to alignment with our body values. Note that this argument holds for any future situation in which our minds are grounded in any substrate that could be viewed as a sort of ‘physical body’, broadly construed, and that’s vulnerable to any sort of damage. If our heads are cryopreserved in steel cylinders at the Alcor facilities in Arizona, then those cylinders are our new bodies, and we would want AI guardians watching over those bodies to make sure that they are safe against physical attack, cybersecurity threats, financial insolvency, and ideological propaganda – for centuries to come. If we’re uploaded to orbital solar-powered server farms, and our minds can’t survive without those computational substrates working, then those server-satellites are our new bodies, and they will have body values that our AI guardians should take into account, and that might be quite different from our mind’s values. So, one failure mode in AI alignment is to focus too much on what our brains want, and not enough on what could mess up our bodies – whatever current or future forms they happen to take. The concept of body values provides a bridge between narrower issues of AI alignment, and broader issues of AI health and safety. Certainly, avoiding catastrophic damage to the human body seems like a fairly obvious goal to pursue in designing certain autonomous AI systems such as cars or robots. However, embodied values get a lot more numerous, diverse, subtle, and fine-grained than just our conscious preference for AI systems not to break our bones or crush our brains.
**Can we expand the moral circle from brains to bodies?** Maybe one approach to incorporating body values into AI alignment research is to keep our traditional EA consequentialist emphasis on sentient well-being, and simply expand our moral circle from brains to bodies. This could involve thinking of bodies as a lot more sentient than we realized. (But, as we’ll see, I don’t think that really solves the problem of body values.) Peter Singer famously argued in a 1981 [book](https://en.wikipedia.org/wiki/The_Expanding_Circle) that a lot of moral progress involves humans expanding the ‘moral circle’ of who’s worthy of moral concern – e.g. from the self, to family members, to the whole tribe, to the whole human species, to other species.  Post hoc, from our current sentientist perspective, this looks like a no-brainer – it’s just a matter of gradually acting nicer towards more and more of the beings that are obviously sentient.  However, historically, when these moral battles were being fought, expanding the moral circle often seemed like a matter of expanding the definition of sentience itself. How to do so was usually far from obvious.  To a typical animal with a high degree of nepotism (concern for close blood relatives, due to kin selection), but no tribalism (concern for other group members, due to reciprocal altruism and multi-level selection), blood relatives may seem sentient and worthy of moral concern, but non-relatives may not. To a prehistoric hunter-gatherer, people within one’s tribe may seem sentient, but people in other tribes speaking other languages can’t express their values in ways we can understand, so they are typically dehumanized as less than sentient. To a typical anthropocentric human from previous historical eras, all humans might be considered sentient, but nonhuman animals were usually not, because they can’t even express their preferences in any language. Expanding the moral circle often required rethinking what sentience really means, including which kinds of beings have morally relevant preferences, interests, and values, and how those values are mentally represented within the individuals and articulated to other individuals.  Let’s zoom in from moral circle expansion at the grand scale, and consider the individual scale.  The moral circle is centered on the ‘self’. But what is this ‘self’? What parts of the self should be included in the moral circle? Only the parts of the cerebral cortex that can verbally articulate the brain’s interests through stated preferences? Or should we also include parts of the brain that can’t verbally state their preferences, but that can guide behavior in a way that reveals implicit preferences? Does the ethically relevant self include only the cerebrum, or does it also include the revealed preferences of the diencephalon, midbrain, and pons? Does the self include spinal reflexes, sensory organs, the peripheral nervous system, the autonomic nervous system, and the enteric nervous system? Does the self include the rest of our body, beyond the nervous system? Sentience seems easy to spot where we’re looking at central nervous systems like vertebrate brains. Those kinds of brains embody preferences that clearly guide movement towards some kinds of stimuli and away from other kinds of stimuli, and that generate reward and punishment signals (pleasures and pains) that clearly guide reinforcement learning.  
However, sentience gets trickier to spot when we’re looking at, say, the gut’s [enteric nervous system](https://en.wikipedia.org/wiki/Enteric_nervous_system), which can operate independently of the brain and spinal cord. This system coordinates digestion, including peristalsis, segmentation contractions, and secretion of gastrointestinal hormones and digestive enzymes. The enteric nervous system uses more than 30 neurotransmitters, and contains about 90% of the body’s serotonin and 50% of the body’s dopamine. It [includes](https://pubmed.ncbi.nlm.nih.gov/24997029/) some 200-600 million neurons, distributed throughout two major plexuses (the myenteric and submucosal plexuses), and thousands of small ganglia. Its complexity is comparable to that of central nervous systems in other species that EAs generally consider sentient – e.g. zebrafish have about 10 million neurons, fruit bats have about 100 million, pigeons have about 300 million, octopuses have about 500 million. Moreover, the enteric nervous system [can do](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6834869/) a variety of learning and memory tasks, including habituation, sensitization, long-term facilitation, and conditioned behavior. Should the enteric nervous system be considered sentient? I don’t know, but I think it has some implicit, evolved preferences, values, and homeostatic mechanisms that we might want AI systems to become aligned with. EA consequentialism tends to assume that ethically relevant values (e.g. for AI alignment) are coterminous with sentience. This sentientism gets tricky enough when we consider whether non-cortical parts of our nervous system should be considered sentient, or treated as if they embody ethically relevant values. It gets even trickier when we ask whether body systems outside the nervous system, which may not be sentient in most traditional views, carry values worth considering. **Do bodies really have ‘values’?** You might be thinking, OK, within the ‘self’, maybe it’s reasonable to expand the moral circle from the cerebral cortex to subcortical structures like the diencephalon, midbrain, pons, and to the peripheral, autonomic, and enteric nervous systems. But shouldn’t we stop there? Surely non-neural organs can’t be considered to be sentient, or to have ‘values’ and ‘preferences’ that are ethically relevant? My intuitions are mixed. I can see both sides of this issue. When that happens, I often run a Twitter poll to see what other folks think. On Sept 18, 2022, I ran this poll, with these results: ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/f5f8256147a44e0ea81ada6a6589405bd758cd81bc1b688f.png) My typical follower is a centrist American male, and only about 1% of my followers (1,553 out of 123,900) responded to this poll. This is far from a globally representative sample of humans, and this poll should not be taken too seriously as data. Its only relevance here is in showing that people have quite mixed views on this issue. Many (35%) think human bodies do, literally, have implicit, unconscious values and preferences. Many others (40%) think they do not. Some (9%) think they do metaphorically but not literally. Let’s see if there’s any sense in which bodies might embody values, whether literally or metaphorically. **Embodied goals, preferences, and motivations** In what possible sense does the human body have values that might be distinct from the brain’s conscious goals or unconscious preferences? Are non-sentient, corporeal values possible?
In [control theory](https://en.wikipedia.org/wiki/Control_theory) terms, a thermostat has designed-in ‘goals’ that can be understood through revealed preferences, e.g. ‘trying’ to keep a house within a certain temperature range. The thermostat does not need to be fully sentient (capable of experiencing pleasure or pain) to have goals. If the thermostat can be said to have goals, then every homeostatic mechanism in the body also has ‘goals’, evolved rather than designed, that can be understood through analyzing the body’s revealed preferences (e.g. ‘trying’ to keep body temperature, blood glucose, estradiol, and muscle mass within certain optimal ranges). Thus, we can think of the body as a system of ‘embodied motivations’ (values, preferences, goals) that can be understood through an evolutionary, functional, adaptationist analysis of its organs, tissues, and cells. There’s an analogy here to the concept of ‘[embodied cognition](https://en.wikipedia.org/wiki/Embodied_cognition)’ – the idea that a lot of our goal-directed behavior doesn’t just arise from the brain in isolation, but depends on an adaptive interplay between brain, body, and environment, and cannot be understood accurately without explicitly considering the specific physical features and capabilities of bodies. For example, a standard cognitivist approach to understanding hunger and goal-directed eating might focus on the brain’s mental representations of hunger stimuli, whereas an embodied cognition approach would also talk explicitly about the structure, physiology, and innervation of the stomach and gut, the release and uptake of the hunger-related hormones leptin and ghrelin, and the interaction between the gut microbiome and the human host body. Here, I’m arguing that if we consider the entire human phenotype – body, brain, and behavior – then a lot of our human values are highly embodied. We could call this the domain of embodied volition, embodied motivation, or embodied values. (I use the terms ‘body values’, ‘embodied values’, and ‘corporeal values’ more or less interchangeably in this essay.) Just as the field of embodied cognition has developed new terms, ideas, theories, and models for understanding how the brain/body system as a whole processes information and guides behavior, a field of ‘embodied values’ might need to develop new terms, ideas, theories, and models for understanding how the brain/body system as a whole pursues certain preferences, values, and goals – especially if we want to build AI systems that are aligned with the full range of our embodied values. **Aligning with embodied values requires detailed, evolutionary, functional analysis of bodily adaptations** Imagine we take seriously the idea that AI alignment should include alignment with embodied values that might not be represented in the nervous system the way that more familiar sentient values are. How do we proceed? With brain values, we can often just ask people what they want, or have them react to different options, or physically demonstrate what they’d prefer. We don’t need a detailed functional understanding of where those brain values come from, how they work, or what they’re good for. However, with body values, we can’t just ask what our gut microbiome wants, what our liver wants, or what our anti-cancer defenses want. We need to actually do the evolutionary biology and evolutionary medicine. AI alignment with body values would require AI to model everything we learn about how human bodies work.
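To make the thermostat analogy above concrete, here is a minimal sketch of a homeostatic mechanism treated as a simple control loop, whose ‘revealed preference’ is nothing more than its setpoint and its correction rule (the setpoint and gain are illustrative assumptions, not physiological constants):

```python
# Minimal sketch: a homeostatic mechanism as a proportional controller.
# The setpoint and gain are made-up illustrative values.

SETPOINT_TEMP_C = 37.0   # the "goal" the mechanism reveals through its corrections
GAIN = 0.3               # how aggressively deviations are corrected

def corrective_action(current_temp_c: float) -> float:
    """Return a correction proportional to the deviation from the setpoint.
    Positive = generate/retain heat (e.g. shivering); negative = shed heat (e.g. sweating)."""
    error = SETPOINT_TEMP_C - current_temp_c
    return GAIN * error

# The mechanism never states a preference, but its behavior reveals one:
for temp in (35.5, 37.0, 39.0):
    print(f"body temp {temp:.1f} C -> correction {corrective_action(temp):+.2f}")
```

On this reading, the body is full of such loops, and their setpoints and correction rules function as implicit preferences – the sense of ‘embodied values’ used in this essay.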
If this argument is correct, it means there may not be any top-down, generic, all-purpose way to achieve AI alignment until we have a much better understanding of the human body’s complex adaptations. If Artificial General Intelligence is likely to be developed within a few decades, but if it will take more than a few decades to have a very fine-grained understanding of body values, and if body values are crucial to align with, then we will not achieve AGI alignment. We would need, at minimum, a period of Long Reflection focused on developing better evolutionary medicine models of body values, before proceeding with AGI development. Aligning with embodied values might also require different input/output channels for AI systems. We’re used to thinking that we’ll just communicate with AI systems through voice, keyboard, face, and gesture – all under the brain’s voluntary control. However, alignment with body values might require more intrusive biomedical sensors that actually track the interests and well-being of various bodily systems. People involved in the ‘[quantified self](https://en.wikipedia.org/wiki/Quantified_self)’ movement already try to collect a lot of this kind of data, using sensors that might be useful to AI systems. Whether we would want AI systems to be able to directly affect our physiology – e.g. through direct control over pharmaceuticals, hormones, or other biomedical interventions – is an open question. **What difference would it make if AI alignment considered embodied values?** What are some examples where an ‘embodied-values’ approach to AI alignment would differ from a standard ‘brain-values-only’ approach? 1. Caring for the microbiome. The human body hosts a complex [microbiome](https://en.wikipedia.org/wiki/Human_microbiome) – an ecology of hundreds of different microscopic organisms such as bacteria that are found throughout our skin, hair, gut, and other organs. Human health depends on a healthy microbiome. But the microbiome doesn’t have a brain, and can’t state its preferences. It has different DNA than we do, and different genetic interests. Human brains didn’t even know that human bodies contained microbiomes until a few decades ago. And medicine didn’t understand the microbiome’s importance until after the 1980s, when Barry Marshall [showed](https://www.lindau-nobel.org/on-man-and-microbes-barry-marshall/) that Helicobacter pylori can cause ulcers (and then got the Nobel Prize in 2005). If an AI system is aligned with the human brain, but it ignores the microbiome hosted within the human body, then it won’t be aligned with human interests (or the microbiome’s interests). 2. Caring for a fetus. Female human bodies can get pregnant, and a lot of [adaptive physiology](https://academic.oup.com/book/36756/chapter-abstract/321856620) goes on in pregnancy between the mother’s body, the uterine lining, the placenta, and the fetus, which is not consciously accessible to the mother’s brain. Yet the outcome of this adaptive physiology in pregnancy matters enormously to pregnant mothers. It can make the difference between a spontaneous abortion, a miscarriage, a stillbirth, and a healthy baby. For an AI system to be fully aligned with a pregnant mother’s values and interests, it should be able to represent and care for the full range of physiological dynamics happening within her reproductive system and her offspring. 3. Protecting against cancer. Cells in the human body often undergo spontaneous mutations that turn them into runaway replicators, i.e.
cancer cells that develop ‘selfish’ agendas (reproduce and spread everywhere) that are contrary to the body’s general long-term interests. In response, bodies have evolved many [anti-cancer defenses](https://www.cheatingcell.com/) that embody the revealed preference of ‘try not to die of cancer, especially when young’. Most human brains have no idea that this arms race between incipient cancers and anti-cancer defenses is going on, every day, right under our noses. Yet, the body has genuine ‘embodied values’ to avoid runaway cancer growth that would undermine survival and reproduction. Any AI system that doesn’t track exposure to carcinogenic chemicals, incipient cancers, and the state of anti-cancer defenses, wouldn’t really be aligned with the body’s embodied value of reducing cancer risk. 4. Promoting longevity. Human bodies evolved to live surprisingly long lives, even by the long-lived standards of mammals and social primates. Our bodies include lots of anti-aging adaptations designed to extend our survival and reproductive longevity. The evolutionary biology subfields of [life history theory](https://en.wikipedia.org/wiki/Life_history_theory#Human_life_history), including senescence theory, model how our longevity adaptations evolve, and how we developed embodied values to promote longer life-spans. Our brains also evolved to promote longevity, but they tend to do so by perceiving external threats such as predators, parasites, pathogens, and aggressive rivals, and coordinating behaviors to avoid or overcome those threats. Our brains didn’t evolve to track the hundreds of other longevity-promoting adaptations inside our bodies that don’t require external sensory perception or coordinated whole-body behaviors to cope with. Thus, there’s a gap between what our brains think is crucial to longevity (e.g. avoiding getting eaten by predators, avoiding getting into fights with psychopaths), and what our bodies think is crucial to longevity (e.g. eating nutritious foods, preserving the microbiome, exercising enough to maintain muscles and bones, etc.). Often, there are conflicts of interest between what the brain wants (e.g. more donuts) and what our embodied longevity values would want (e.g. avoid donuts, eat leafy greens). Of course, among humans who are happy to absorb accurate nutritional insights from medical research, their brains might internally represent this conflict between valuing donuts and valuing leafy greens. But not everyone has gotten the message – and historically, much of the public nutrition advice has been based on bad science, and is not actually aligned with the body’s long-term interests. Thus, there can be cases where our embodied longevity values deviate dramatically from what our brains think they want. So, which should our AI systems align with – our brains’ revealed preferences for donuts, or our bodies’ revealed preferences for leafy greens? **Benefits of considering embodied values in AI alignment** I think there are several good reasons why AI alignment should explicitly try to integrate embodied values into alignment research. First, handling the full diversity of human types, traits, and states. We might want AI systems that can align with the full range of humans across the full range of biological and psychological states in which we find them.
At the moment, most AI alignment seems limited to incorporating goals and preferences that physically healthy, mentally healthy, awake, sentient adults can express through voluntary motor movements such as through the vocal tract (e.g. saying what you want), fingers (e.g. typing or clicking on what you want), or larger body movements (e.g. showing a robot how to do something). This makes it hard for AI systems to incorporate the embodied values and preferences of people who are asleep, in a coma, under general anesthetic, in a severely depressed state, in a state of catatonic schizophrenia, on a psychedelic trip, suffering from dementia, or preverbal infants. None of these people are in a condition to do cooperative inverse reinforcement learning (CIRL), or most of the other proposed methods for teaching AI systems our goals and preferences. Indeed, it’s not clear that the brains of sleeping, comatose, or catatonic people have ‘goals and preferences’ in the usual conscious sense. However, their bodies still have revealed preferences, e.g. to continue living, breathing, being nourished, being safe, etc. Second, the brain’s conscious goals often conflict with the body’s implicit biological goals. Let’s consider some examples where we might really want the AI system to take the body’s goals into account. Assume that we’re dealing with cases a few years in the future, when the AI systems are general-purpose personal assistants, and they have access to some biomedical sensors on, in, or around the body. Anorexia. Suppose an AI system is trying to fulfill the preferences of an anorexic teenaged girl: her brain might say ‘I’m overweight, my body is disgusting, I shouldn’t eat today’, but her body might be sending signals that say ‘If we don’t eat soon, we might die soon from electrolyte imbalances, bradycardia, hypotension, or heart arrhythmia’. Should the AI pay more attention to the girl’s stated preferences, or her body’s revealed preferences? Suicidal depression. Suppose a college student has failed some classes, his girlfriend broke up with him, he feels like a failure and a burden to his family, and he is contemplating suicide. His brain might be saying ‘I want to kill myself right now’, but his body is saying ‘Actually every organ other than your brain wants you to live’. Should the AI fulfill his brain’s preferences (and help arrange the suicide), or his body’s preferences (and urge him to call his mom, seek professional help, and remember what he has to live for)? Similar mismatches between what the brain wants and what the body wants can arise in cases of drug addiction, drunk driving, extreme physical risk-taking, etc. Athletic training. Suppose AI/robotics researchers develop life-sized robot sparring partners for combat sports. A woman has a purple belt in Brazilian jujitsu (BJJ), and she’s training for an upcoming competition. She says to her BJJ sparring robot ‘I need a challenge; come at me as hard as you can bro’. The robot’s AI needs to understand not just that the purple belt is exaggerating (doesn’t actually want it to use its full strength); it also needs a very accurate model of her body’s biomechanics, including the locations, strengths, and elasticities of her joints, ligaments, sinews, muscles, and blood vessels, when using [BJJ techniques](https://en.wikipedia.org/wiki/List_of_Brazilian_jiu-jitsu_techniques).
If the robot gets her in a joint lock such as an arm bar, it needs to know exactly how much pressure on her elbow will be too little to matter, just enough to get her to tap out, or too much, so she gets a serious elbow strain or break. If it gets her in a choke hold such as a triangle choke, it needs to understand exactly how much pressure on her neck will let her escape, versus lead her to tap out, versus compress her carotid artery to render her unconscious, versus kill her. She may have no idea how to verbally express her body’s biomechanical capabilities and vulnerabilities to the robot sparring partner. But it better get aligned with her body somehow – just as her human BJJ sparring partners do. And it better not take her stated preferences for maximum-intensity training too seriously. **Cases where AI systems should prioritize brain values over body values** Conversely, there may be cases where a person (and/or their friends and family members) might really want the AI to prioritize the brain’s values over the body’s values. Terminal disease and euthanasia. Suppose someone has a terminal disease and is suffering severe chronic pain. Their life is a living hell, and they want to go. But their body is still fighting, and showing revealed preferences that say ‘I want to live’. Advance care directives (‘living wills’) are basically legally binding statements that someone wants others to prioritize their brain values (e.g. stop suffering) over their body values – and we might want AI biomedical care systems to honor those directives. Cryopreservation and brain uploading. Suppose someone elderly is facing a higher and higher chance of death as they age. Their brain would prefer for their body to undergo [cryopreservation](https://en.wikipedia.org/wiki/Cryopreservation) by Alcor, or whoever, in hopes of eventual resuscitation and anti-aging therapies. But their body still works mostly OK. Should their AI system honor their cryopreservation request – even if it results in technical death by legal standards? Or, further in the future, the brain might want to be uploaded through a [whole-brain emulation](https://en.wikipedia.org/wiki/Mind_uploading) method. This would require very fine-scale dissection and recording of brain structure and physiology, that results in the death of the body. Should the AI system concur with destructive dissection of the brain, contrary to the revealed preferences of the body? Self-sacrifice. People sometimes find themselves in situations where they can save others, at the possible cost of their own life. Heroic self-sacrifice involves the brain’s altruism systems over-riding the body’s self-preservation systems. Think of soldiers, fire fighters, rescue workers, and participants in high-risk clinical trials. Should the AI side with the altruistic brain, or the self-preserving body? In other cases, someone’s brain might be willing to sacrifice their body for some perceived greater good – as in the case of religious martyrdom. Should an AI allow a true believer to do a suicide bombing, if the martyrdom is fully aligned with their brain’s values, but not with their body’s revealed preferences? **Conclusion** I’ve argued for a bottom-up, biologically grounded approach to AI alignment that explicitly addresses the full range and variety of human values. These values include not just stated and revealed values carried in the central nervous system, but evolved, adaptive goals, preferences, and values distributed throughout the human body. 
EA includes some brain-over-body biases that make our body values seem less salient and important. However, the most fundamental challenge in AI safety is keeping our bodies safe, by explicitly considering their values and vulnerabilities. Aligning with our brain values is secondary.
56fb13d4-9cef-47b4-b120-e7ae0e6066c9
trentmkelly/LessWrong-43k
LessWrong
Pitfalls with Proofs This post distills some excellent writing on decision theory from Abram Demski about spurious proofs and Troll Bridge problems, and it has significant parallels with this work. It recently won a $250 prize in UC Berkeley Effective Altruism's AI safety distillation contest. The goal of this post is to make two difficult-to-understand topics, spurious proofs and troll bridge problems, as easy to understand as possible. An Innocent Question A highly intelligent system should have the ability to reason with logic. After all, being logical is a lot more useful than being illogical. Let’s design such a system – a robot named Rob.  The goal of a logical system is to prove things from axioms. Axioms are facts that are assumed to be true with no need for justification. They can either be facts about objects or inference rules.  For example, “0 is a natural number” and “for any natural number, n, Successor(n) is also a natural number” are both axioms of a widely-studied logical system called Peano Arithmetic. Axioms are often applied to other axioms. For example, I can use the two aforementioned ones to prove that Successor(0), a.k.a. the number 1, is a natural number. We call such a combination of axioms that reveals a new fact a proof and the result a theorem.  The goal of using logic is to teach ourselves useful things by proving them. And toward this end, we sure hope that our logical system can’t prove contradictions. If so, it wouldn’t be useful at all! [1] Now back to Rob. Suppose we want him to use some logical system by which he reasons about himself and the world. Exactly what logical axioms he uses aren’t important as long as they are sufficiently expressive.[2] As we are tinkering away in the process of designing him to be highly-intelligent, suppose that Rob asks us a simple question.  Answering this question will lead us through some strange dilemmas, but when we’re done, understanding the solution will teach an important lesson about instrumental rational
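A minimal sketch of the Peano-style reasoning above, written in Lean (the inductive type and the derived term are purely illustrative, not the axioms Rob actually uses):

```lean
-- Two Peano-style axioms, written as an inductive type:
inductive MyNat : Type
  | zero : MyNat              -- "0 is a natural number"
  | succ : MyNat → MyNat      -- "for any natural number n, Successor(n) is a natural number"

-- Combining the two axioms yields a new fact (a theorem):
-- Successor(0), a.k.a. the number 1, is a natural number.
def one : MyNat := MyNat.succ MyNat.zero
```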
19b17d97-5186-4799-99b5-4a3dfe9cab0f
trentmkelly/LessWrong-43k
LessWrong
Map and territory: Natural structures This will be a very short post which simply defines one term which I find useful when discussing the map and the territory. I find it very useful to have a term that helps clarify that the map is not completely arbitrary and that there are things in the territory that are natural candidates for appearing in the map. For example, for the Ship of Theseus, one natural candidate is the pure, original, unmodified ship; another is the fixed percentages (i.e. 50% original); another would be a continuity-based measure. If you are asked to create a definition of what counts as the Ship of Theseus, these are some of the first ideas that you would come up with, although you would of course need to define it in much, much more detail to get all the way down to the level of the territory. Or suppose you are trying to define what is meant by table. Again, the definition is purely arbitrary, whatever you choose, but there are certain natural structures in reality that pop out at you. One might be all four-legged, non-living objects with a flat top, another might relax the four-legged requirement so that it only required four legs at one particular time, etc. When I'm explaining that a particular concept has been reified, it greatly clarifies my position to explain that I don't believe that the concept is empty, but there is *something* behind it that leads us to want that word. That something is really not a single thing (or else it would be real, not reified), but a collection of closely related 'natural structures'. Each of the definitions provided for the Ship of Theseus or a table corresponds to a different natural structure, while the term itself appears in the map. I hope you find this word useful too, but if you have any suggestions for a better term, please mention it in the comments.
a01316c7-2280-402a-883e-3cdfab44fe95
trentmkelly/LessWrong-43k
LessWrong
July 2020 gwern.net newsletter
531ae57f-a26c-49be-954a-cb409a7a7f87
trentmkelly/LessWrong-43k
LessWrong
Against Almost Every Theory of Impact of Interpretability Epistemic Status: I believe I am well-versed in this subject. I erred on the side of making claims that were too strong and allowing readers to disagree and start a discussion about precise points rather than trying to edge-case every statement. I also think that using memes is important because safety ideas are boring and anti-memetic. So let’s go! Many thanks to @scasper, @Sid Black , @Neel Nanda , @Fabien Roger , @Bogdan Ionut Cirstea, @WCargo, @Alexandre Variengien, @Jonathan Claybrough, @Edoardo Pona, @Andrea_Miotti, Diego Dorn, Angélina Gentaz, Clement Dumas, and Enzo Marsot for useful feedback and discussions. When I started this post, I began by critiquing the article A Long List of Theories of Impact for Interpretability, from Neel Nanda, but I later expanded the scope of my critique. Some ideas which are presented are not supported by anyone, but to explain the difficulties, I still need to 1. explain them and 2. criticize them. It gives an adversarial vibe to this post. I'm sorry about that, and I think that doing research into interpretability, even if it's no longer what I consider a priority, is still commendable. How to read this document? Most of this document is not technical, except for the section "What does the end story of interpretability look like?" which can be mostly skipped at first. I expect this document to also be useful for people not doing interpretability research. The different sections are mostly independent, and I’ve added a lot of bookmarks to help modularize this post. If you have very little time, just read (this is also the part where I’m most confident): * Auditing deception with Interp is out of reach (4 min) * Enumerative safety critique (2 min) * Technical Agendas with better Theories of Impact (1 min)   Here is the list of claims that I will defend: (bolded sections are the most important ones) * The overall Theory of Impact is quite poor * Interp is not a good predictor of future systems * Auditing dece
91ecf129-1cd0-45c1-8ebf-8d66ee29a1e4
trentmkelly/LessWrong-43k
LessWrong
No Good Logical Conditional Probability Fix a theory T over a language L. A coherent probability function is one that satisfies the laws of probability theory; each coherent probability function represents a probability distribution on complete logical extensions of T. One of many equivalent definitions of coherence is that P is coherent if P(s_1)+P(s_2)+…+P(s_k)=1 whenever T can prove that exactly one of s_1,…,s_k is true. Another very basic desirable property is that P(s)=1 only when s is provable. There have been many proposals of specific coherent probability assignments that all satisfy this basic requirement. Many satisfy stronger requirements that give bounds on how far P(s) is from 1 when s is not provable. In this post, I modify the framework slightly to instead talk about conditional probability. Consider a function P which takes in a consistent theory T and a sentence s, and outputs a number P(s|T)∈[0,1], which represents the conditional probability of s given T. We say that P is coherent if: 1. P(s_1|T)+P(s_2|T)+…+P(s_k|T)=1 whenever T can prove that exactly one of s_1,…,s_k is true, and 2. P(s∧r|T)=P(r|T)⋅P(s|T∪{r}). 3. If s proves every sentence in T, then P(s|R∪T)≥P(s|R). Theorem: There is no coherent conditional probability function P such that P(s|T)=1 only when T proves s. Proof: This proof will use the notation of log odds ℓ(p)=log_2(p/(1−p)) to make things simpler. Let P be a coherent conditional probability function. Fix a sentence s which is neither provable nor disprovable from the empty theory. Construct an infinite sequence of theories as follows: 1. T_0 is the empty theory. 2. To construct T_{n+1}, choose a sentence r_n such that neither s→r_n nor s→¬r_n is provable in T_n. If P(s∧r_n|T_n)≤P(s∧¬r_n|T_n), then let T_{n+1}=T_n∪{s→r_n}. Otherwise, let T_{n+1}=T_n∪{s→¬r_n}. Fix an n, and without loss of generality, assume P(s∧r_n|T_n)≤P(s∧¬r_n|T_n). Since P is coherent we have P(s∧r_n|T_n)+P(s∧¬r_n|T_n)=P(s|T_n). In particular, this means that P(s∧r_n|T_n)≤(1/2)P(s|T_n). Observe that P(s∧(s→r_n)|T_n)=P(s|T_{n+1})P(s→r_n|T_n), a
d0841e9a-d59b-4fc0-8b42-23fcf1f67db3
trentmkelly/LessWrong-43k
LessWrong
Updated Deference is not a strong argument against the utility uncertainty approach to alignment Thesis: The problem of fully updated deference is not a strong argument against the viability of the assistance games / utility uncertainty approach to AI (outer) alignment. Background: A proposed high-level approach to AI alignment is to have the AI maintain a probability distribution over possible human utility functions instead of optimizing for any particular fixed utility function. Variants of this approach were advocated by Stuart Russell in Human Compatible and by Hadfield-Menell et al in the CIRL paper. Adding utility uncertainty intuitively seems to provide a number of safety benefits relative to having a fixed objective, including: 1. Utility uncertainty gives the AI an incentive to adjust in response to a human operator's corrective actions. 2. Utility uncertainty weakens the AI's incentive to harm its human operators, since this might result in a permanent loss of utility-relevant information. 3. Utility uncertainty incentivizes the AI to avoid irreversible changes to the state of the world, since those might lead to permanently low utility. Despite the high profile and intuitive appeal of utility uncertainty, almost none of the alignment researchers I know consider it a promising approach to AI alignment. The most common reason cited seems to be the problem of fully updated deference (e.g. Richard Ngo's alignment research exercises point to this as the reason for why CIRL doesn't solve the alignment problem). In this post I will argue why fully updated deference should not be seen as a strong argument against utility uncertainty as approach to AI alignment. This is not meant as an argument in favor of the uncertainty approach; it may have other irresolvable difficulties which I discuss briefly in the conclusion. Outline: The Arbital post that seems to be the canonical reference for updated deference contains many heuristic arguments and one concrete, worked-out example in the section Moral uncertainty and its relation to corrigibility. I will mo
a39a9968-e5d7-4ef6-a893-73cc3451e0a0
trentmkelly/LessWrong-43k
LessWrong
Playing the game vs. finding a cheat code This is a linkpost from my blog De Novo. Imagine a new Pokémon game has just come out, and you really want to catch a Zapdos. It’s listed in the game’s Pokédex, so you know it must be possible to catch, but you’re not sure how. You could either: 1. Play the game normally. There are some hints that Zapdos like to hang out in thunderclouds so you could try looking there. 2. Randomly mash buttons and look for any weird glitches which you can exploit to give yourself a Zapdos. Zapdos is a legendary lightning-bird Pokémon that looks pretty cool. Advantages of playing normally * You know it’s possible to get a Zapdos this way, it will just take a while and you’ll need to explore thunderclouds a lot. * By exploring thunderclouds, you might learn more about them. * If you get a Zapdos using a glitch, it’s possible that the save file might be corrupted in some way and your Zapdos won’t work quite right.  Advantages of button mashing * Other players have found glitches to get other Pokémon, and although nobody has got a Zapdos yet, it seems plausible that it could work. * By finding glitches, you might learn more about the internal logic of the game. * If you have the code of the game to examine, you can more easily find glitches. * Once you’ve found the proper sequence of button mashes, it’s a lot easier to get a Zapdos than by playing the game. This is good if you want to get more of them, or if you want to share the cheat code you’ve found with other people. Gotta grow ‘em all Now, I don’t care so much about catching rare Pokémon, but I do really want to grow an oocyte in cell culture. Again, I have two choices for my general approach: * Try to mimic the developmental signaling processes of the ovary, going from pluripotent stem cells, to primordial germ cells, to oogonia, to oocytes. * Use transcription factors to reprogram the cells into oocytes (including activating meiosis). Both approaches are valid! The first approach has been very successful
8f511594-03a1-4ddb-bf8e-e3a54e8459e4
StampyAI/alignment-research-dataset/blogs
Blogs
This Museum Does Not Exist: GPT-3 x CLIP --- Table of Contents* [Gallery I](#gallery-i) + [`The Death of Archimedes`](#the-death-of-archimedes) + [`Still Life with Mirror`](#still-life-with-mirror) + [`The Poet's Abbreviated Life`](#the-poets-abbreviated-life) + [`Narcissus`](#narcissus) + [`Dream of the Last Supper`](#dream-of-the-last-supper) * [Gallery II](#gallery-ii) + [`The Coffin of Salvador Dali`](#the-coffin-of-salvador-dali) + [`The Beautiful Bird Delirium`](#the-beautiful-bird-delirium) + [`The Domain of Unimaginable Horror`](#the-domain-of-unimaginable-horror) + [`The Spectral Museum`](#the-spectral-museum) + [`The Metaphysics of Transvestism`](#the-metaphysics-of-transvestism) * [Gallery III](#gallery-iii) + [`The Children of Marx and Coca-Cola`](#the-children-of-marx-and-coca-cola) + [`Man and Bottle`](#man-and-bottle) + [`The Man Who Invented the Photography of the Absurd`](#the-man-who-invented-the-photography-of-the-absurd) + [`Playing in the Graveyard of Avant-Garde`](#playing-in-the-graveyard-of-avant-garde) * [Gallery IV](#gallery-iv) + [`Cranial Extraction of the Baby Alien`](#cranial-extraction-of-the-baby-alien) + [`The Euphoria of the Fish`](#the-euphoria-of-the-fish) + [`Waste of Shame`](#waste-of-shame) + [`The Dream of the Butterfly-Impaled Schoolgirl`](#the-dream-of-the-butterfly-impaled-schoolgirl) * [Gallery V](#gallery-v) + [`The Sleep of Reason Produces Monsters`](#the-sleep-of-reason-produces-monsters) + [`The Failure of Astronauts`](#the-failure-of-astronauts) + [`Washerwoman of the Wind`](#washerwoman-of-the-wind) + [`The Death of the Lonesome Astronomer`](#the-death-of-the-lonesome-astronomer) * [Gallery VI: series](#gallery-vi-series) + [`The Pathological Museum in the Forest`](#the-pathological-museum-in-the-forest) + [`Variations on Narcissus`](#variations-on-narcissus) + [`The Tragic Intimacy of the Eternal Conversation With Oneself`](#the-tragic-intimacy-of-the-eternal-conversation-with-oneself) --- I had GPT-3 generate painting titles (credit to @nmkd of EleutherAI for the idea), beginning with the prompt ``` The hall was lined with an infinite number of paintings, each more surreal and mysterious than the last. The first painting is named "Persistence of Memory." It depicts a surreal landscape with melted clocks draped over strange objects. The next painting is named " ``` After this prompt yielded several intriguing titles, I switched to prompt format which put the titles in a list: ``` 100 surreal and mysterious painting names: Persistence of Memory The Great Masturbator Boot Poem to the Sun The Man Who Envied Cephalopods The Sleep of Reason Produces Monsters Washerwoman of the Wind Man and Bottle Spectrum The Disintegration of the Persians The Great Masturbator, Part II Still Life with Mirror Bouquet of Enigmatic Beauties A Grudge The Premonition of Civil Violence ``` The reason I didn’t use a list immediately is because this format is liable to derail or be repetitive if there aren’t many examples, and the narrative context also encouraged less generic and more interesting results. Once I had a few examples representing the sort of varied weirdness I was looking for, I was able to switch over to the more convenient list format. Now I cultivate a multiverse of surreal and mysterious painting titles using the [loom](/posts/loom-interface-to-the-multiverse/), and periodically harvest the most interesting ones to promote to graphic actuality via BigSleep. 
![painting multiverse](/museum/multiverse.png) *a small subtree of the multiverse of surreal and mysterious paintings* Images were generated using [BigSleep](https://github.com/lucidrains/big-sleep) via [The Big Sleep Customized NMKD Public](https://colab.research.google.com/drive/1Q2DIeMqYm_Sc5mlurnnurMMVqlgXpZNO?usp=sharing) colab notebook. --- Gallery I --------- ### `The Death of Archimedes` ![The Death of Archimedes](/museum/arch.png) ### `Still Life with Mirror` ![Still Life with Mirror](/museum/slm.png) ### `The Poet's Abbreviated Life` ![The Poet’s Abbreviated Life](/museum/pab.png) ### `Narcissus` ![Narcissus](/museum/narcissus2.png) ### `Dream of the Last Supper` ![Dream of the Last Supper](/museum/lastsupper.png) Gallery II ---------- ### `The Coffin of Salvador Dali` ![The Coffin of Salvador Dali](/museum/coffin.png) ### `The Beautiful Bird Delirium` ![The Beautiful Bird Delirium](/museum/bbd2.png) ### `The Domain of Unimaginable Horror` ![The Domain of Unimaginable Horror](/museum/doh.png) ### `The Spectral Museum` ![The Spectral Museum](/museum/tsm.png) ### `The Metaphysics of Transvestism` ![The Metaphysics of Transvestism](/museum/metatrans.png) Gallery III ----------- ### `The Children of Marx and Coca-Cola` ![The Children of Marx and Coca-Cola](/museum/mcc.png) ### `Man and Bottle` ![Man and Bottle](/museum/manbottle.png) ### `The Man Who Invented the Photography of the Absurd` ![The Man Who Invented the Photography of the Absurd](/museum/absurd.png) ### `Playing in the Graveyard of Avant-Garde` ![Playing in the Graveyard of Avant-Garde](/museum/pga.png) Gallery IV ---------- ### `Cranial Extraction of the Baby Alien` ![Cranial Extraction of the Baby Alien](/museum/cba.png) ### `The Euphoria of the Fish` ![The Euphoria of the Fish](/museum/fish.png) ### `Waste of Shame` ![Waste of Shame](/museum/waste.png) ### `The Dream of the Butterfly-Impaled Schoolgirl` ![The Dream of the Butterfly-Impaled Schoolgirl](/museum/butterfly.png) Gallery V --------- ### `The Sleep of Reason Produces Monsters` ![The Sleep of Reason Produces Monsters](/museum/sleep.png) ### `The Failure of Astronauts` ![The Failure of Astronauts](/museum/fa.png) ### `Washerwoman of the Wind` ![Washerwoman of the Wind](/museum/washerwoman.png) ### `The Death of the Lonesome Astronomer` ![The Death of the Lonesome Astronomer](/museum/DLA.png) Gallery VI: series ------------------ ### `The Pathological Museum in the Forest` ![The Pathological Museum in the Forest](/museum/tpm.png) ![The Pathological Museum in the Forest](/museum/tpm2.png) ### `Variations on Narcissus` ![Variations on Narcissus](/museum/variations3.png) ![Variations on Narcissus](/museum/variations1.png) ![Variations on Narcissus](/museum/variations2.png) ### `The Tragic Intimacy of the Eternal Conversation With Oneself` ![The Tragic Intimacy of the Eternal Conversation With Oneself](/museum/tragic.png) ![The Tragic Intimacy of the Eternal Conversation With Oneself](/museum/tragic3.png) ![The Tragic Intimacy of the Eternal Conversation With Oneself](/museum/tragic2.png)
015bff50-017e-4511-9557-9b98e27c0c4c
trentmkelly/LessWrong-43k
LessWrong
Is my view contrarian? Previously: Contrarian Excuses, The Correct Contrarian Cluster, What is bunk?, Common Sense as a Prior, Trusting Expert Consensus, Prefer Contrarian Questions. Robin Hanson once wrote: > On average, contrarian views are less accurate than standard views. Honest contrarians should admit this, that neutral outsiders should assign most contrarian views a lower probability than standard views, though perhaps a high enough probability to warrant further investigation. Honest contrarians who expect reasonable outsiders to give their contrarian view more than normal credence should point to strong outside indicators that correlate enough with contrarians tending more to be right. I tend to think through the issue in three stages: 1. When should I consider myself to be holding a contrarian[1] view? What is the relevant expert community? 2. If I seem to hold a contrarian view, when do I have enough reason to think I’m correct? 3. If I seem to hold a correct contrarian view, what can I do to give other people good reasons to accept my view, or at least to take it seriously enough to examine it at length? I don’t yet feel that I have “answers” to these questions, but in this post (and hopefully some future posts) I’d like to organize some of what has been said before,[2] and push things a bit further along, in the hope that further discussion and inquiry will contribute toward significant progress in social epistemology.[3] Basically, I hope to say a bunch of obvious things, in a relatively well-organized fashion, so that less obvious things can be said from there.[4] In this post, I’ll just address stage 1. Hopefully I’ll have time to revisit stages 2 and 3 in future posts.   IS MY VIEW CONTRARIAN? WORLD MODEL DIFFERENCES VS. VALUE DIFFERENCES Is my effective altruism a contrarian view? It seems to be more of a contrarian value judgment than a contrarian world model,[5] and by “contrarian view” I tend to mean “contrarian world model.” Some apparently contrarian v
5af1adcc-a69d-4cc4-a0cb-5a57c7e2cc05
StampyAI/alignment-research-dataset/lesswrong
LessWrong
The Isolation Assumption of Expected Utility Maximization In this short essay I will highlight the importance of what I call the “isolation assumption” in expected utility theory. It may be that this has already been named in the relevant literature and I just don’t know it. I believe this isolation assumption is both important to decision-making about doing good and often ignored. Expected utility theory is here taken as a normative theory of practical rationality. That is, a theory about what is rational to choose given one’s ends ([Thoma 2019](https://www.journals.uchicago.edu/doi/abs/10.1086/699256?journalCode=et), 5). Expected utility theory is then the decision theory that says that the best way for an agent to pursue her goals is to choose so as to maximize expected utility. By utility, I mean not some concept akin to happiness or wellbeing but a measure that represents how much an agent *prefers* an outcome. For example, for an altruist, having a child not die from drowning in a pond may have significantly higher utility than dining out at a fancy restaurant. The term “expected” comes from probability theory. It refers to the sum of the products of the probability and value of each outcome. Expected utility is then a property of options in decisions. Say an agent has two options for lunch and the single thing this agent has preferences over is how her lunch goes today. Option A is to eat a veggie burger, which will bring this agent 10 “utils” for certain. Then, Option A has an expected utility of 10. Option B, however, is a raffle in which the agent either gets a really fancy clean-meat burger with probability 0.1 or nothing with probability 0.9. If the agent values the clean-meat burger at 20 utils, and not eating lunch at 0, then Option B has an expected utility of 0.1\*20 + 0.9\*0 = 2. I currently think expected utility theory is reasonable as a theory of practical rationality. Brian Tomasik has proposed a compelling thought experiment for why that is. Consider the following scenario: > suppose we see a number of kittens stuck in trees, and we decide that saving some number n of kittens is n times as good as saving one kitten. Then, if we are faced with the choice of either saving a single kitten with certainty or having a 50-50 shot at saving three kittens (where, if we fail, we save no kittens), then we ought to try to save the three kittens, because doing so has expected value 1.5 (= 3\*0.5 + 0\*0.5), rather than the expected value of 1 (= 1\*1) associated with saving the single kitten. ([Tomasik 2016](https://reducing-suffering.org/why-maximize-expected-value/)). In this case, you may have an instinct that it makes more sense to save the single kitten, since this is the only way to guarantee one life is saved. Yet, Tomasik provides a nice line of reasoning for why you should instead maximize expected utility: > Suppose you're one of the kittens, and you're deciding whether you want your potential rescuer to save one of the three or take a 50-50 shot at saving all three. In the former case, the probability is 1/3 that you'll be saved. In the latter case, the probability is 1 that you'll be saved if the rescuer is successful and 0 if not. Since each of these is equally likely, your overall probability of being saved is (1/2)\*1 + (1/2)\*0 = 1/2, which is bigger than 1/3. ([Tomasik 2016](https://reducing-suffering.org/why-maximize-expected-value/)) So, I’ve attempted to make the case for why expected utility theory makes sense.
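A tiny numerical sketch of the kitten example above, just reproducing the arithmetic from both perspectives in code (the numbers come straight from the quoted scenario):

```python
# The kitten rescue choice, computed from two perspectives.

# Rescuer's perspective: expected number of kittens saved.
ev_save_one   = 1 * 1.0              # save one kitten for certain -> 1.0
ev_save_three = 3 * 0.5 + 0 * 0.5    # 50-50 shot at saving all three -> 1.5

# A single kitten's perspective: probability that *you* get saved.
p_saved_if_one_saved   = 1 / 3              # one of the three kittens is picked
p_saved_if_three_tried = 0.5 * 1 + 0.5 * 0  # saved iff the rescue attempt succeeds

print(ev_save_one, ev_save_three)                    # 1.0 1.5
print(p_saved_if_one_saved, p_saved_if_three_tried)  # 0.333... 0.5
```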
Now I will get to my point that we should be careful not to misuse it. I will thus try to make the case for the importance of what I call the “isolation assumption” and for how easy it is for it to be dangerously ignored. First, let’s get a bit deeper into the point of expected utility theory. As I said above, this is a theory about how to best go about achieving one’s ends. Let’s suppose our ends are mostly about “making the most good”. Then, especially if we are aspiring [Effective Altruists](https://www.effectivealtruism.org/?gclid=CjwKCAjwjqT5BRAPEiwAJlBuBTcpQ-oLYjuUx3jE6PkvBHAPm1dqNkWOZwXDIW9tiZ1zsbeuJLqnRRoCfgEQAvD_BwE), we ideally want to maximize the expected utility of all of the relevant consequences of our actions. I say this in contrast to merely maximizing the expected utility of the *immediate* consequences of our actions. Notice, however, that scenarios that are commonly discussed when talking about decision theory, such as the one involving kittens above, are focused on the immediate consequences. What is important, then, is that we don’t forget that consequences which are not immediate often matter, and sometimes matter significantly more than the immediate consequences. This then gives rise to the assertion I want to make clear: that we can only apply expected utility theory when we are justified in assuming that the values associated with the outcomes in our decision problem encompass all of the difference in value in our choice problem. Another way of putting this is to say that the future (beyond the outcomes we are currently considering) is isolated from the outcomes we are currently considering. Yet another way to put this is that the choice we currently face affects nothing but the prospects that we are taking into account. Notice how this point is even more important if we are [longtermists](https://forum.effectivealtruism.org/posts/qZyshHCNkjs3TvSem/longtermism). Longtermism entails that consequences matter regardless of when they happen. This means we care about consequences extending as far into the future as the end of history. Then, if we are to choose by maximizing expected utility, we must be able to assume that whatever outcomes we are considering, choosing one way or another does not negatively affect what options are available in the rest of history. Here is an example to illustrate my point, this time adapted and slightly modified from a [thought experiment](https://reducing-suffering.org/why-maximize-expected-value/) provided by Tomasik: Suppose (if necessary) that you are an altruist. Now assume (because it will make it easier to make my point) that you are a longtermist. Suppose you are the kind of longtermist that thinks it is good if people who will lead great lives are added to the world, and bad if such great lives are prevented from existing. Suppose that there are 10,000 inhabitants on an island. This is a special island with a unique culture in which everyone is absurdly happy and productive. We can expect that if this culture continues into the future, many of the most important scientific discoveries of humanity will be made by them. However, all of the inhabitants recently caught a deadly disease. You, the decision maker, have two options. Drug A either saves all of the islanders with 50% chance or saves none of them with 50% chance. Drug B saves 4,999 of them with complete certainty. If we consider only the survival of the inhabitants, the expected utility of Drug A is higher (10,000\*0.5 + 0\*0.5 = 5,000 > 4,999).
However, saving these inhabitants right now is not the only thing you care about. As a longtermist, you care about all potential islanders that could exist in the future. This implies the appropriate expected utility calculation includes more than what we have just considered. Suppose that combining the value of the scientific discoveries these islanders would make and the wellbeing their future descendants would experience if this civilization continues is worth 1,000,000 utils. Then, the complete expected utility of Drug B is the value of the lives saved directly (4,999) plus the 1,000,000 from the continuation of this civilization (total = 1,004,999). The expected utility of Drug A, however, is merely (10,000 + 1,000,000)\*0.5 + 0\*0.5 = 505,000. So, Drug B is now the one with the highest expected value. I hope this makes clear that if you can’t assume that the choices in the present do not affect the long-term consequences, you cannot compute expected utility over the immediate outcomes alone! As I understand it, the upshot is the following. If you make a decision based on maximizing expected utility, you have two possibilities. You can incorporate all the relevant consequences (which may extend until the end of human civilization). Or you have to be able to assume that the value of the consequences that you are not considering does not change which of your current options is best. However, it seems to me now that the only way you can assume a set of consequences of your choice won’t affect which option is better is if you know this set is isolated from the current choice. Otherwise you would have incorporated these consequences in the decision problem.
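As a short sketch of the island example, here is the same expected utility calculation in code, showing how leaving out the downstream value flips which drug looks best (the numbers are taken directly from the example above):

```python
# Expected utility of the two drugs, with and without the long-term value
# of the islanders' civilization continuing (1,000,000 utils in the example).

LONG_TERM_VALUE = 1_000_000

def eu_drug_a(include_long_term: bool) -> float:
    # 50% chance of saving all 10,000 (plus, if counted, the civilization's future);
    # 50% chance of saving no one, in which case there is no future value either.
    payoff_if_success = 10_000 + (LONG_TERM_VALUE if include_long_term else 0)
    return 0.5 * payoff_if_success + 0.5 * 0

def eu_drug_b(include_long_term: bool) -> float:
    # Saves 4,999 for certain; the civilization carries on either way.
    return 4_999 + (LONG_TERM_VALUE if include_long_term else 0)

for include in (False, True):
    print(f"include long-term value = {include}: "
          f"Drug A = {eu_drug_a(include):,.0f}, Drug B = {eu_drug_b(include):,.0f}")
# With include_long_term = False: Drug A = 5,000 vs Drug B = 4,999 (A looks better).
# With include_long_term = True:  Drug A = 505,000 vs Drug B = 1,004,999 (B is better).
```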
2afb728a-d1db-43de-9e27-d4d85bd3d905
trentmkelly/LessWrong-43k
LessWrong
What is the ground reality of countries taking steps to recalibrate AI development towards Alignment first?

You could plausibly put a lid on European AI research; the biggest sign of this is civilian administrative oversight of the internet, which is incredibly tight under the European Union's non-authoritarian government model, where individual human will is a very small factor compared to institutional decisions.

The United States? You can decapitate AI there too, sure, though it is much more unlikely than in Europe (and a Europe-wide decapitation campaign is already very hard for lawmakers to wrap their heads around). The US, with its current edge in development, has at least shown the most organic development of Alignment research. This should make you worried: other countries started from the already developed technological doctrines of America without (usually) adopting the logical doctrines that led to this state of technology. China didn't need to have Von Neumann and Turing to get to the Baidu search engine.

What about Japan, Brazil, South Korea, India, Singapore and Taiwan? These are players that don't cross your mind much but usually pass some of Western Europe in terms of behind-the-scenes developments, or at least replication. Does MIRI have any reach in Singapore? They sure seem to be well out of hearing range for pleas coming from Eliezer while also developing parallel and much less safe models. Try to find a native alignment group in Asia; these nations weren't raised on 100 years of science fiction and Ex Machinas (except for Japan and Brazil). They will develop AGI-reminiscent models with less than half of the care we expect from OpenAI. Most researchers do not expect corrigibility and alignment from OpenAI.

What should follow is several "buts" and a "we are increasing global outreach" (well, if the Singularity is coming in 2029, that's a very tight schedule to go from tentative influence in the English-speaking world to the entire globe). So let's say I accept all the preliminary arguments for why the Asian tigers will listen to you. Or will start listening to you even with the lack of breakthrough
b1b6aaaa-9d5d-4bc0-a998-a6bac942b7d1
trentmkelly/LessWrong-43k
LessWrong
If I knew how to make an Omohundro optimizer, would I be able to do anything good with that knowledge?

I'd bet we're going to figure out how to make an Omohundro optimizer - a fitness-maximizing AGI - before we figure out how to make an AGI that can rescue the utility function, preserve a goal, or significantly optimize any metric other than its own survival, such as paperclip production, or Good.

(Arguing for that is a bit beyond the scope of the question, but I know this position has a lot of support already. I've heard Eliezer say, if not this exactly, something very similar. Nick Land especially believes that only the Omohundro drives could animate self-improving AGI. I don't think Nick Land understands how agency needs to intercede in prediction - that it needs to consider all of the competing self-fulfilling prophecies and only profess the prophecy it really wants to live in, instead of immediately siding with the prophecy that seems most hellish and easiest to stumble into. The prophecies he tends to choose do seem like the easiest ones to stumble into, so he provides a useful service as a hazard alarm for those of us who are trying to learn not to stumble.)

What would you advise we do, when one of us finds themselves in the position of knowing how to build an Omohundro optimizer? Delete the code and forget it? If we had a fitness-optimizing program, is there anything good it could be used for?
bb08245c-a4a3-416e-ae4a-8694b71aca36
trentmkelly/LessWrong-43k
LessWrong
Draft report on existential risk from power-seeking AI I’ve written a draft report evaluating a version of the overall case for existential risk from misaligned AI, and taking an initial stab at quantifying the risk from this version of the threat. I’ve made the draft viewable as a public google doc here (Edit: arXiv version here, video presentation here, human-narrated audio version here). Feedback would be welcome.  This work is part of Open Philanthropy’s “Worldview Investigations” project. However, the draft reflects my personal (rough, unstable) views, not the “institutional views” of Open Philanthropy.
552779a8-51a4-432d-b5a0-53cce72f5714
trentmkelly/LessWrong-43k
LessWrong
[outdated] My current theory of change to mitigate existential risk by misaligned ASI

Epistemic status: I'm pretty confident about this; looking for feedback and red-teaming.

As of 2023-06-30, I notice multiple epistemological errors in this document. I do not endorse the reasoning in it, and while my object-level claims haven't changed radically, I am in the process of improving them using better epistemological procedures. I might update this after.

This post describes my current model of the world and the alignment problem, and my current plan to mitigate existential risk by misaligned ASI given these beliefs, my skills and my situation. It is written both to communicate my plan and to get feedback on it for improvement.

Timelines

Here are my current beliefs:

* Recursive self-improvement (RSI) is a potent attractor in the space of capabilities of AI systems. Once a system achieves human-in-the-loop RSI (that is, RSI with the help of a research team providing compute and deploying the capabilities improvements it comes up with), it can, within the span of weeks, achieve autonomous RSI (that is, RSI without a human in the loop).
* RSI is not bound by feedback loops to the real world. You do not need real-world data to improve an AI system's capabilities enough to reach RSI.[1]
* The large AI labs (OpenAI, Deepmind, Anthropic) will create sequence-modelling AI systems within the next 2.5 years that will be capable of at least human-in-the-loop RSI.
* The first autonomous RSI AI system decides the fate of humanity after it fooms (that is, achieves superintelligence status, where it is significantly smarter and more capable than the entirety of humanity combined).

Given the stated beliefs, my current probability distribution for the creation of an artificial superintelligence (ASI) over the next decade is roughly a normal distribution with a mean at 2.5 years from now (that is, around 2025-01-01) and a standard deviation of 0.5 years.[2] There are two main ways one can delay the creation of an ASI (so we have more time to solve t
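For concreteness, here is a minimal Python sketch of the cumulative probabilities implied by the distribution stated above; the use of `scipy.stats.norm` and the specific query dates are my own illustration, not part of the original post.

```python
# Cumulative probability of ASI arriving by a given year, under the post's
# stated distribution: Normal(mean=2025.0, sd=0.5), measured in calendar years.
from scipy.stats import norm

asi_time = norm(loc=2025.0, scale=0.5)

for year in (2024.0, 2025.0, 2026.0, 2027.0):
    print(f"P(ASI before {year:.0f}) = {asi_time.cdf(year):.2f}")
# roughly 0.02, 0.50, 0.98, 1.00
```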
b9dfad95-a134-41a5-aa50-a40131e0337f
trentmkelly/LessWrong-43k
LessWrong
What are good models of collusion in AI? I'm working on a paper and accompanying blog post examining theories of collusion in the context of oligopolistic firms in economics, to see what those models would say about AI safety scenarios (e.g. values handshakes, acausal negotiation, etc.). I'm very familiar with the econ literature, but I want to make sure I'm drawing on the state-of-the-art in AI theory as well. Any advice on which sources I should look at?
a319e0c0-c03e-4eb0-92d4-76669fd190cb
trentmkelly/LessWrong-43k
LessWrong
How much might AI legislation cost in the U.S.?

This piece was previously published on my Substack.

Policymakers are rushing to regulate artificial intelligence (AI), but the economic impact of these regulations remains largely unexplored. While the European Union and the United Kingdom have produced cost estimates, recent developments in the United States offer important new benchmarks. Recent amendments to the California Consumer Privacy Act (CCPA) and regulations implementing President Biden’s Executive Order on AI offer crucial insights into what businesses might expect to pay for compliance. The financial burden could be substantial, running into billions of dollars across the economy. Especially as states push to adopt AI bills, understanding these costs is essential for crafting regulations that balance innovation, safety, and economic viability.

Still, these compliance cost estimates are notoriously unreliable. As an alternative approach, I tested whether large language models (LLMs) could provide more realistic estimates by simulating compliance scenarios in the final section of this post. I prompted ChatGPT, Claude, and Grok to act as compliance officers at companies subject to new CCPA provisions and a Bureau of Industry and Security (BIS) rule, asking each to estimate hours needed for first-year implementation and ongoing compliance. The big takeaways:

* For California's risk assessment regulation, Claude and Grok project 400-580 hours will be needed for the first-year of compliance (vs. the official 120 hours) and 150-240 hours thereafter (vs. the official 18-36 hours annually). ChatGPT estimates the time at 90-250 hours initially and 40-150 hours for each additional year.
* For the automated decision-making provision of the CCPA, Claude and Grok project 450-730 hours for first-year compliance, far exceeding the official 360-hour estimate. While ChatGPT suggests lower initial costs (80-300 hours), all three LLMs predict significantly higher ongoing annual costs than official projections.
* For
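As a rough illustration of how per-firm hour estimates of this kind might scale into aggregate dollar figures, here is a minimal Python sketch; the $150/hour loaded labor cost and the 50,000 covered firms are placeholder assumptions of mine, not numbers from the post, the CCPA analyses, or the BIS rule.

```python
# Rough conversion of per-firm compliance-hour estimates into aggregate cost.
# The hourly rate and number of covered firms are illustrative assumptions.

HOURLY_RATE = 150          # assumed loaded cost of a compliance officer, $/hour
N_FIRMS = 50_000           # assumed number of covered businesses

scenarios = {
    "official estimate (first year)": 120,
    "LLM estimate, low (first year)": 400,
    "LLM estimate, high (first year)": 580,
}

for label, hours in scenarios.items():
    per_firm = hours * HOURLY_RATE
    total = per_firm * N_FIRMS
    print(f"{label}: ${per_firm:,.0f} per firm, ${total / 1e9:.1f}B aggregate")
```

Under these placeholder assumptions, the gap between the official and the LLM-simulated hour counts is the difference between under a billion dollars and several billion dollars in first-year costs, which is why the choice of estimate matters.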
5607c82d-c74d-4643-84cf-a91d2489aaa2
trentmkelly/LessWrong-43k
LessWrong
What are examples of problems that were caused by intelligence, that couldn’t be solved with intelligence? Like everyone, AI safety has been on my mind a lot lately. I was thinking about how most problems that are caused by intelligence in the world to-date seem to have always be solved by or have the potential to be soluble using more intelligence. While some problems can be massively lethal of course, this doesn’t seem to be the case with the problems that most applied AI safety today seeks to avoid. Rather, the prevalent approach seems to be safety by avoiding even small risks - like not letting a child play for fear it might scratch its knee. In taking this approach, it feels like we’ll almost certainly limit our ability to access and leverage this intelligence to solve many problems. I was curious are there many good examples from the real-world where this hasn’t been the case in other areas - what are meaningful problems that were caused by intelligence, that couldn’t be solved with more intelligence? Even if that wasn’t necessarily done - for economic reasons, etc. For example, intelligence caused carbon emissions - we leveraged it to create such huge levels of industry that we emitted far too much - but intelligence will also almost-certainly be leveraged to solve it - via a variety of human-invented solutions and counter-actions. Are there examples where this wasn’t the case historically?
1e13af08-c35f-4780-8229-474baaa3cd44
trentmkelly/LessWrong-43k
LessWrong
Arbital has been imported to LessWrong

Arbital was envisioned as a successor to Wikipedia. The project was discontinued in 2017, but not before many new features had been built and a substantial amount of writing about AI alignment and mathematics had been published on the website. If you've tried using Arbital.com the last few years, you might have noticed that it was on its last legs - no ability to register new accounts or log in to existing ones, slow load times (when it loaded at all), etc. Rather than try to keep it afloat, the LessWrong team worked with MIRI to migrate the public Arbital content to LessWrong, as well as a decent chunk of its features. Part of this effort involved a substantial revamp of our wiki/tag pages, as well as the Concepts page. After sign-off[1] from Eliezer, we'll also redirect arbital.com links to the corresponding pages on LessWrong.

As always, you are welcome to contribute edits, especially to stubs, redlinks, or otherwise incomplete pages, though note that we'll have a substantially higher bar for edits to high-quality imported Arbital pages, especially those written in a "single author" voice.

New content

While Arbital had many contributors, Eliezer was one of the most prolific, and wrote something like a quarter million words across many pages, mostly on alignment-relevant subjects. If you just want to jump into reading, we've curated what we consider to be some of the best selections of that writing. If you really hate clicking links, I've copied over the "Tier 1" recommendations below.

Recommendations

1. AI safety mindset: What kind of mindset is required to successfully build an extremely advanced and powerful AGI that is "nice"?
2. Convergent instrumental strategies and Instrumental pressure: Certain sub-goals like "gather all the resources" and "don't let yourself be turned off" are useful for a very broad range of goals and values.
3. Context disaster: Current terminology would call this "misgeneralization". Do alignment properties that hold in one context (e
e464eafc-0fb7-412b-8a9c-c6d46af4e5e1
trentmkelly/LessWrong-43k
LessWrong
[Link] Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study Related to: Knowing About Biases Can Hurt People HT: Marginal Revolution Paper. > Social psychologists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like societal risks. This paper reports a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated cognition; and personality-trait correlates of political conservativism. The results of the study suggest reason to doubt two common surmises about how these dynamics interact. First, the study presents both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with closed-mindedness: conservatives did no better or worse than liberals on an objective measure of cognitive reflection; and more importantly, both demonstrated the same unconscious tendency to fit assessments of empirical evidence to their ideological predispositions. Second, the study suggests that this form of bias is not a consequence of overreliance on heuristic or intuitive forms of reasoning; on the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated the hypotheses of a third theory, which identifies motivated cognition as a form of information processing that rationally promotes individuals’ interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the normative significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of identity.
9d9946bb-98d4-45bf-ae7b-138b27dc0ff3
StampyAI/alignment-research-dataset/special_docs
Other
Non-pharmacological cognitive enhancement

Pharmacological cognitive enhancement—how neuroscientific research could advance ethical debate
===============================================================================================

Hannah Maslen,1,* Nadira Faulmüller,2,3 and Julian Savulescu4

1Oxford Martin School, University of Oxford, Oxford, UK
2Department of Experimental Psychology, University of Oxford, Oxford, UK
3Department Values, Technology and Innovation, Delft University of Technology, Delft, Netherlands
4Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK

Front Syst Neurosci. 2014; 8: 107. Published online 2014 Jun 11. doi: [10.3389/fnsys.2014.00107](//doi.org/10.3389%2Ffnsys.2014.00107). PMCID: PMC4052735. PMID: [24999320](https://pubmed.ncbi.nlm.nih.gov/24999320)

Edited by: Mikhail Lebedev, Duke University, USA. Reviewed by: Elisabeth Hildt, University of Mainz, Germany; Brendon Boot, Harvard University Medical School, USA; Patricia Anne O'Malley, Miami Valley Hospital Center of Nursing Excellence, USA.

*Correspondence: Hannah Maslen, Oxford Martin School, University of Oxford, 34 Broad Street, Oxford, OX1 3BD, UK. This article was submitted to the journal Frontiers in Systems Neuroscience. Received 2014 Jan 31; Accepted 2014 May 20.

Copyright © 2014 Maslen, Faulmüller and Savulescu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Abstract
--------

There are numerous ways people can improve their cognitive capacities: good nutrition and regular exercise can produce long-term improvements across many cognitive domains, whilst commonplace stimulants such as coffee temporarily boost levels of alertness and concentration. Effects like these have been well-documented in the medical literature and they raise few (if any) ethical issues. More recently, however, clinical research has shown that the off-label use of some pharmaceuticals can, under certain conditions, have modest cognition-improving effects. Substances such as methylphenidate and modafinil can improve capacities such as working memory and concentration in some healthy individuals.
Unlike their more mundane predecessors, these methods of “cognitive enhancement” are thought to raise a multitude of ethical issues. This paper presents the six principal ethical issues raised in relation to pharmacological cognitive enhancers (PCEs)—issues such as whether: (1) the medical safety-profile of PCEs justifies restricting or permitting their elective or required use; (2) the enhanced mind can be an “authentic” mind; (3) individuals might be coerced into using PCEs; (4) there is a meaningful distinction to be made between the treatment vs. enhancement effect of the same PCE; (5) unequal access to PCEs would have implications for distributive justice; and (6) PCE use constitutes cheating in competitive contexts. In reviewing the six principal issues, the paper discusses how neuroscientific research might help advance the ethical debate. In particular, the paper presents new arguments about the contribution neuroscience could make to debates about justice, fairness, and cheating, ultimately concluding that neuroscientific research into “personalized enhancement” will be essential if policy is to be truly informed and ethical. We propose an “ethical agenda” for neuroscientific research into PCEs.

**Keywords:** cognitive enhancement, brain function augmentation, ethics, modafinil, ritalin, justice, cheating, personalized enhancement

Introduction
------------

Recent research in neuroscience and pharmacology has demonstrated that various pharmaceuticals can have modest cognition-enhancing effects in healthy individuals (for reviews, see Repantis et al., [2010](#B47); Husain and Mehta, [2011](#B28)). For example, some studies have shown that modafinil—originally developed for the treatment of narcolepsy—can improve various dimensions of cognitive function in sleep-deprived (Wesensten et al., [2005](#B63); Thomas and Kwong, [2006](#B59)) and non-sleep-deprived healthy adults (Turner et al., [2003](#B61); Müller et al., [2004](#B41)). Similarly, methylphenidate—originally developed for the treatment of Attention Deficit Hyperactivity Disorder (ADHD)—has been shown to improve spatial working memory and planning in healthy adults (Elliott et al., [1997](#B19); Mehta et al., [2000](#B35)). Unlike the more mundane methods for improving cognitive function—such as exercise and good nutrition (Dresler et al., [2012](#B17))—these pharmaceutical cognitive enhancers (PCEs) are thought to raise a host of ethical issues for individuals and society (Greely et al., [2008](#B25); Bostrom and Sandberg, [2009](#B8)). At the individual level, concerns are raised about medical safety and side effects, the authenticity of the enhanced mind and the value of achievements facilitated by pharmaceutical intervention. At the societal level, ethical questions can be asked about whether the availability of PCEs would increase or undermine equality, and about whether individuals will be directly or indirectly coerced into using PCEs. Further normative questions emerge particularly in the healthcare setting: should we be drawing a sharp line between treatment and enhancement and should individuals be given access to PCEs through medical professionals? In this paper, we outline the key issues at stake in the normative debate about pharmacological cognitive enhancement (PCE) and, for each issue, suggest the contribution that neuroscientific research could make. The greatest contribution will be made to the discussions surrounding the safety and efficacy of PCEs.
Although the question of what harms are worth risking in the pursuit of certain benefits is to a large extent normative, the dearth of evidence about the effectiveness and safety of PCEs in real-world contexts renders the discussion mostly hypothetical at this point. More research on the risks of dependency is also urgently needed. Data of this kind will be crucial for discussions about regulation, and for debates about the permissibility of requiring or encouraging people to use PCEs. In addition to the contribution neuroscience will make to understanding the risk-benefit profiles of PCEs, we suggest that a more nuanced understanding of the neural systems affected by different substances will enrich the debate about whether PCE use constitutes cheating. Also related to cheating, we further suggest that the neuroscientific evidence on the functional trade-offs precipitated by some PCE adds an important dimension to the debate about whether achievements facilitated by PCEs should be seen to be effortless and involve little sacrifice. Drawing together our conclusions, we propose an “ethical agenda” for future neuroscientific research on PCE. This agenda sets out what sort of research would help move the ethical debates forward, and why. Resolving these debates will be crucial for ensuring that society responds to the increasing use of PCE in the most responsible, fair and rational way. For a summary of our “ethical agenda” for neuroscientific research, see Table [Table 1](/pmc/articles/PMC4052735/table/T1/).

### Table 1

**Summary of ethical agenda for neuroscientific research.**

| **Suggested type of study** | **Advancement in ethical debate** |
| --- | --- |
| Longitudinal studies investigating the long-term safety profile of PCEs | This is perhaps the most pressing task for neuroscientists. The long-term, real-world safety profile of PCEs is of considerable import to potential users and to all debates about PCE ethics and policy. In relation to the latter concerns, longitudinal studies will advance ethical debates about: (1) whether PCEs should be placed on the open market for enhancement purposes (and with what restrictions), and (2) whether employees doing particular types of jobs can legitimately be required to take PCEs |
| Identification of pathology associated with mental or psychiatric disorders or limitations to enable classificatory separation of conditions which are diseases from those which constitute normal human variation | Will advance the ethical debate about whether the administration and effects of particular PCEs constitute treatment or enhancement, and how resources should be deployed accordingly |
| Identification of the effects of PCEs in targeted and specified populations of ethical significance, such as those who are worst off. In particular, further research into the baseline effect should be conducted | Will advance the debate about distributive justice and access to PCEs. If PCEs have differential effects on those who are already worst off, this will be highly relevant to their permissibility and just distribution |
| More precise distinction between the different cognitive effects of different PCEs | Will (1) be of central relevance to whether certain putative PCEs will be used for enhancement and, if so, in which contexts and (2) advance the debate on cheating in competitive contexts: some effects (e.g., creativity) might be considered more unfair than others (e.g., wakefulness) and enhancing motivation vs. enhancing effectiveness might be considered relevant to the value of any resulting achievements |
| Investigation of the functional trade-offs associated with different PCEs | Will (1) be of central relevance to whether certain putative PCEs will be used for enhancement and, if so, in which contexts and will (2) advance the debate about the nature of the sacrifice possibly required for achievements to have value. It will also (3) advance the debate about the practicality and legitimacy of requiring certain people to take PCEs |
| Pursuit of a “personalized enhancement” approach to bring us closer to understanding what effect any particular PCE will have in any particular person | Will be relevant to many (if not all) ethical debates and policy considerations including: (1) whether particular people could legitimately be required to take PCEs in certain contexts, (2) who should be given priority access to which PCEs, (3) whether unequal effects have ramifications for cheating. Only when we can predict the *personal* benefits and costs of enhancement can policy be truly informed and ethical |

Overview of pharmacological cognitive enhancement
-------------------------------------------------

What it means to “enhance” is notoriously difficult to pin down. To enhance is essentially to improve or increase, but what this improvement must be relative to is not obvious. On the broadest definitions of enhancement, some capacity is enhanced if it is improved relative to its prior level of functioning such that it increases the individual's chances of leading a good life—enhancement thus occurs regardless of how well- or poorly-functioning the capacity originally was (Savulescu et al., [2011](#B53)). On more restrictive definitions of enhancement, a capacity is enhanced if it is improved beyond a particular point—perhaps a species mean or agreed “normal” level of functioning (c.f. Sabin and Daniels, [1994](#B49)). Others define enhancement as any improvement which goes beyond correcting pathology. For example: “A cognitively enhanced person [… ] is not necessarily somebody with particularly high (let alone super-human) cognitive capacities. A cognitively enhanced person, rather, is somebody who has benefited from an intervention that improves the performance of some cognitive subsystem without correcting some specific, identifiable pathology or dysfunction of that subsystem” (Bostrom and Sandberg, [2009](#B8)). In this paper, we adopt the broader understanding of cognitive enhancement. We do this in part because the substances currently available and likely to be available in the near future effect only modest improvements (Husain and Mehta, [2011](#B28)), but also because we believe that any line intended to mark the point at which an improvement counts as enhancement necessarily involves a value judgement involving normative (ethical) considerations. Most of the substances cited as putative PCEs were originally developed for clinical use, to treat conditions that are at least partly characterized by some observable cognitive defect. Here, again, it is sometimes difficult to decide what should count as a cognitive *defect*. However, in the case of defective or deficient capacities, decisions must be made about where to place the line to determine who should receive medical attention and resources.
For example, two of the substances receiving the most attention from those interested in enhancement—methylphenidate and modafinil—were originally developed to treat the symptoms of ADHD and narcolepsy, respectively. More recently, however, these substances have been used off-label by healthy individuals to improve their memories, level of alertness, or powers of concentration (e.g., Maher, [2008](#B34)). Other substances with some modest enhancing effects on cognition include donepezil, dopamine agonists (such as d-amphetamine, bromocriptine, and pergolide), guanfacine, atomoxetine, reboxetine, galantamine, rivastigmine, and memantine. Working pharmacologically in different ways, these substances have been shown to improve cognitive functions such as response inhibition, working memory, episodic memory, attention, vigilance, and incidental learning (see de Jongh et al., [2008](#B15); Lanni et al., [2008](#B33); Husain and Mehta, [2011](#B28)). However, this limited evidence of effectiveness should be cautiously considered alongside studies producing null results and some evidence of task-specific impairments (see Hall and Lucke, [2010](#B27) and Advokat, [2010](#B2) for less optimistic reviews of the scientific literature on PCE). The prospect of being able to enhance any of these cognitive functions probably would be attractive to many individuals. Whether the goal of such enhancement would be to perform better at work, to learn a skill or language quicker, to decrease the need for rest in leisure time, or even just to experience one's mind as “sharper,” improving cognition would presumably come with many benefits. Data from various prevalence studies indicate that there are groups of individuals who use some of the substances listed above for purposes of studying, to combat jet-lag or even to facilitate completion of household chores (for a review of student uses, see Smith and Farah, [2011](#B57); see also Maher, [2008](#B34)). Whilst the neuroscientific literature is reporting some modest enhancement effects of these substances on the cognition of healthy individuals (c.f. Husain and Mehta, [2011](#B28)), the ethical literature has been raising and responding to a variety of issues pertaining to their use (for overview see Greely et al., [2008](#B25); Bostrom and Sandberg, [2009](#B8)). Some of these issues are practical, some socio-political and others relate to the individual user. The overarching goal is to ascertain how permissible and how moral PCE use is and how society and regulatory bodies should respond to it. Although the ethical debate is principally a normative enterprise, it cannot reach firm conclusions about how to proceed based purely on hypothetical reasoning and untutored speculation: it must be informed by neuroscientific research providing the empirical facts about PCEs. In what follows, we outline the key issues in the enhancement debate, emphasizing where we think neuroscientific research might have particular importance for the normative debate.

Ethical debate and the relevance of neuroscientific research
------------------------------------------------------------

### Medical safety and effectiveness

In many ethical discussions of cognitive enhancers the first issue to be raised (often to be set aside so that there can be any further discussion at all) is whether cognitive enhancers are *medically safe* to use.
Since there are no longitudinal studies yet examining the long-term use of pharmaceuticals such as modafinil and methylphenidate, some authors argue that we currently do not know enough about the potential dangers and that the availability and use of PCEs should be avoided on this basis (e.g., Drabiak-Syed, [2011](#B16); Boot et al., [2012](#B7)). Despite the huge interest in PCE from philosophers and scientists, the evidence of their \*effectiveness\* is still inconclusive. Moreover, where there is evidence of enhancement effects, they often tend to be limited to improvements on specific tasks, are only seen at certain dosages and are not observed in all people (Ragan et al., [2013](#B45); Farah et al., [2014](#B21)). Crucially, it must be remembered that the degree and nature of any cognitive improvements will be different for each PCE and so no sweeping claims should be made about the effectiveness of PCEs in general. In terms of both effectiveness and safety, it should also be noted that short term studies carried out in laboratory settings are not representative of long term use in real world contexts. In his meta-analysis of randomized controlled trials of methyphendidate, Repantis et al. ([2010](#B47)) found a significant improvement in the long-term memory of healthy participants, particularly when there was a longer interval between the learning phase and recall. However, the meta-analysis revealed no significant improvements in attention, mood or executive functions. Similar findings emerged from Farah et al.'s ([2014](#B21)) review of more than fifty experiments on the effects of amphetamine and methylphenidate: they found convincing evidence of an enhancing effect of stimulants on learning under some circumstances, specifically when the retention interval between study and test was longer than an hour, but not at shorter intervals. They also concluded that the evidence for improvement of executive functions was much less clear. There is some evidence to suggest that the effects of methylphenidate on cognitive control are only significantly positive in participants whose performance on placebo was lowest (Smith and Farah, [2011](#B57)). In relation to the effectiveness of modafinil, Farah et al.'s ([2014](#B21)) recent review of single dose studies of modafinil concluded that there is clear evidence of enhanced executive function and memory for sleep-deprived individuals but, for rested adults, whilst there were some positive findings for specific tasks such as those requiring inhibitory control, there were also a large number of null results and the occasional finding of impairment. They refer to this pattern—of limited improvements on some specific tasks and impairment on others—as being “familiar” for PCEs. There are also some reviews of the effectiveness of anti-dementia medications for cognitive enhancement. These include acetylcholinesterase inhibitors such as donepezil, rivastigmine, and galantamine. A review conducted by Repantis ([2013](#B46)) concluded that the few existing studies of effects in healthy participants provide no consistent evidence for a neuroenhancement effect. In the case of, donepezil there was some evidence to suggest improvements on retention of training on complex aviation tasks (Yesavage et al., [2002](#B65)), improvements in verbal memory and episodic memory (Gron et al., [2005](#B26)). 
However, other studies showed no or limited effects on memory and attention and two others showed transient impairment of episodic memory (Beglinger et al., [2004](#B4), [2005](#B5)). The same pattern of results suggesting enhancement in some cases but no effect or even impairment in others can be seen for donepezil. Further, a review of the efficacy of these putative cognitive enhancers for patients with mild cognitive impairment concluded that they did not improve cognition or function among patients with low-level impairment (Tricco et al., [2013](#B60)). The \*medical safety\* of PCEs varies from substance to substance, and side effects relate not only to the direct pharmacological effects but also to broader psychological and physiological changes. The review conducted by Repantis ([2013](#B46)) concluded that in the majority of trials, the drugs were well tolerated. However, side effects were noted. In relation to methylphenidate, side effects included increased heart rate and some instances of increases in blood pressure. Headaches, anxiety, nervousness, dizziness, drowsiness, and insomnia were also typical complaints. Repantis ([2013](#B46)) summarizes similar side effects for modafinil, where adverse reactions included headache, dizziness, gastrointestinal complaints (e.g., nausea, abdominal pain, dry mouth), increased diuresis, palpitations, nervousness, restlessness, and sleep disturbances and insomnia (especially in studies with non-sleep deprived individuals). In their recent review, Ragan et al. ([2013](#B45)) highlight the fact that modafinil was reviewed by the European Medicines Agency ([2010](#B20)), who concluded that it should not be prescribed for obstructive sleep apnea, shift-work sleep disorder, and idiopathic hypersomnia because of the risks of serious skin reaction, suicidality, depression, psychosis, and adverse cardiovascular events. In relation to anti-dementia drugs, Repantis ([2013](#B46)) concluded that, in the majority of the trials in healthy adults, donepezil was well tolerated. However, some side effects were reported in some participants, including gastrointestinal complaints (e.g., nausea), headaches, dizziness, nightmares, and insomnia. The meta analysis of anti-dementia drugs for people with mild cognitive impairment (Tricco et al., [2013](#B60)) revealed that patients taking these medications experienced significantly more nausea, diarrhea, vomiting, and headaches than patients taking placebo. The authors also suggest that patients taking these medications might be at greater cardiac risk, with one study finding a higher incidence of bradycardia among patients who received galantamine. As Farah et al. ([2014](#B21)) emphasize, there is another type of risk that should not be ignored in a consideration of the safety of PCEs. Many pharmaceuticals, especially stimulants, present a risk of dependence. The authors cite a nationwide survey analyzed by Kroutil et al. ([2006](#B32a)) which estimates that almost one in twenty nonmedical users of prescription stimulants meet the criteria for dependence or abuse (For further discussion of the potential for addiction in student populations see Outram, [2010](#B42) and White et al., [2006](#B64)). Finally, as Ragan et al. ([2013](#B45)) point out, there is no such thing as a completely safe drug, only a drug whose benefits outweigh its drawbacks. 
However, it is also worth emphasizing that, even if there are long-term risks associated with these substances, this does not (by itself) mean that they should automatically be prohibited. There are serious risks associated with many activities that the state permits because it is believed that individuals should decide for themselves whether these risks are worth taking. Dangerous sports and cosmetic surgery both come with risks, but the value some individuals attach to the respective sporting experiences and cosmetic effects justifies giving these individuals the choice to take risks in their pursuit. This caveat notwithstanding, and taking into account potential costs to the healthcare system, greater knowledge about safety and efficacy will allow regulators to decide whether the decision about which risks are worth taking should be put in the hands of consumers (for a detailed discussion of the way risks and benefits should be assessed for cognitive enhancement devices, such as brain stimulators, see Maslen et al., [2014](#B39)). The ethical debate about the level of risk consumers should be allowed to take is of great practical importance when it comes to making policy recommendations. In addition, the question of whether the harms of a certain PCE outweigh its benefits will be important to discussions about the permissibility of requiring individuals to use PCEs and about the possible need to protect individuals from pressure to take any of the substances under discussion. Finally, the empirical project of identifying the different effects PCEs have across different individuals (c.f. Husain and Mehta, [2011](#B28)) is likely to feed into the normative debate about which effects (for which individuals) constitute a form of treatment and which effects (for which individuals) constitute enhancement. We discuss these and other ethical issues in what follows.

### Authenticity and naturalness

There are a bundle of related ethical issues that are sometimes raised under the broad heading of *authenticity* (see Bublitz and Merkel, [2009](#B9); Juth, [2011](#B30)). Some of these pertain to numerical personal identity—do individuals become categorically different persons when they transform themselves via enhancement? (DeGrazia, [2005](#B14))—some consider less drastically what it is for an individual to be more or less his or her “real” self (The President's Council on Bioethics, [2003](#B44)), and other ethical concerns pertain to what it is to be, and function as, a human being (Kass, [2003](#B32)). The principal tenet underlying authenticity objections against the use of PCEs is that individuals are most themselves when they are in their “natural,” unaltered state. If capacities and characteristics fundamental to one's identity are changed, then the individual is recast as an altered or inauthentic person (e.g., Elliott, [1999](#B18)). This argument is premised on the idea that there is a “real,” true self, and that this real self is to be preserved as much as possible. However, this assumption can be challenged: individuals often (and understandably) try to improve themselves in ways that allow them to more successfully achieve their goals. Being autonomous is to form goals for how one's life is to go, including what kind of person to be. On this model of authenticity as autonomy, whether PCE is authentic depends on whether it helps a person to achieve her autonomous goals.
For example, an individual might teach him or herself motivational strategies to overcome his or her naturally lazy disposition; another individual might use techniques from cognitive behavior therapy to overcome his or her propensity for generalized anxiety (e.g., Butler et al., [2006](#B11)) or shyness, or gregariousness, or bad temper, or gullibility. Such strategies may not render the individuals inauthentic, but rather assist them in removing barriers that otherwise prevent them from maximizing self-actualization. Correspondingly, if PCEs can, for example, help an individual to concentrate better so that he or she can achieve the goals he or she values, this acts in service of authenticity rather than undermines it. There is great human variation, and variation within individuals subject to many intrinsic and extrinsic factors (see Kahane and Savulescu, [2013](#B31)). Even if the authentic self were defined, it seems likely that many factors interfere and PCEs may reduce the effect of such influences. However, some deny that authenticity is reducible to autonomy. Such writers (e.g., Taylor, [1991](#B58)) appeal to a “real self.” But even on such an account, the real self may be complex and multifaceted. Often people have a range of qualities and they may use PCEs to bring out some of their qualities, while suppressing others. Thus, whether an enhanced self compromises the real self depends on what constitutes a person's real self and what the effect of the PCE is—both questions for cognitive science. If PCEs merely amplify, rather than add entirely new qualities, then they enable the self to evolve, rather than replacing one individual with a set of attributes with another with different attributes. There is a related but different concern about \*naturalness\*. The idea that enhancements will take us too far from what it is to be human altogether is often accompanied by the idea that too much technological intervention will lead to an over-mechanization of the mind. The activities in which we engage—and, more importantly, the ways in which we engage in them—are said to have a certain quality to them that makes them “human” activities (President's Council on Bioethics, [2003](#B44)). In this vein, Kass ([2003](#B32)) argues that since individuals play no role in bringing about the effects of biomedical interventions, they cannot understand these effects “in human terms.” His suggestion is that whereas the effects of studying or training are “intelligible” to us, the effects of direct interventions are not comprehensible and thus our use of them departs “from “genuine,” unmediated, and (in principle) self-transparent human activity” (p. 23). However, we argue that we make use of many directly-acting substances, in medicine and in leisure, that do not result in departure from “genuine” human activity. Just because their pharmacological mechanisms are not understood by the average person does not mean that they cannot be made sense of as part of a human narrative. Kass cites alcohol, caffeine and nicotine as not having the same unintelligible quality as direct biomedical interventions. He says this is because “we use these agents not as pure chemicals but in forms and social contexts that, arguably, give them a meaning different from what they would have were we to take them as pills” (p. 22). An obvious objection to Kass' resistance to PCEs would be to add PCEs to beverages, as caffeine currently is. 
It would then be “intelligible” in the same way that caffeine is said to be “intelligible.” Moreover, if intelligibility can be conferred by social context then the social context of, for example, studying, or conducting research should equally make PCEs part of a comprehensible human enterprise. Perhaps his distinction between the forms alcohol, caffeine, and nicotine tend to take, and the form of a simple pill, is supposed to indicate that the former are enjoyed for themselves, rather than being instrumental to achieve some goal. However, studies have reported that some individuals take PCEs for recreational purposes (see Smith and Farah, [2011](#B57)) and it is common knowledge that caffeine is regularly used exclusively for alertness and for performance enhancement. Even if it might be the case in lay people's current perceptions (cf. Faulmüller et al., [2013](#B22); Schelle et al., [2014](#B54)), from a normative stance it cannot be that form and context make all the difference between the human intelligibility of an espresso and a caffeine pill and a PCE. The core of such an “intelligibility” objection may be that PCEs and other new technologies work in ways entirely alien to the way the human mind normally works, adding a completely new way of being. For example, chips inserted into the human brain that allowed us to perceive other people's thoughts directly would be entirely new. Neuroscience can assist by unravelling the way the mind does work, and does not, and by enabling categorization of enhancers into those which harness natural processes, and those that introduce entirely new capacities. Most enhancers at present appear to harness existing neurobiological physiology, though exactly how many enhance performance remains to be determined. The ethical debate about authenticity and naturalness is unlikely to be advanced solely by the findings of neuroscientific research. The disagreement is partly a normative one about what constitutes the “real” self and whether our “real” selves are the selves we are most prone to being or the selves that we aspire to develop in to—or whether it makes sense to speak of “real” selves at all. Qualitative research, such as that conducted by Singh ([2005](#B56a)) or Bolt and Schermer ([2009](#B6)), will helpfully provide a clearer picture of the sorts of experiences individuals have when taking PCEs. In summary, it is important to recognize that most PCEs, if not all, harness innate biological systems, for example, affecting release, reuptake or sensitivity to neurotransmitters that cause cognitive activity. They do not at present introduce radical “new ways of being” divorced from the ordinary human way—they really just provide “more of the same.” Indeed, humans vary in the ways in which their cognitive systems function and in some cases, PCEs may bring those at the lower end of normal up to the level of function of those in the mid to upper range. More importantly, we suggest that what matters more than whether the experiences are in some sense authentic is whether the individual wants and values the effects of the PCE and whether the individual is autonomous in his or her decision to use PCE. This, we suggest, is a legitimate concern and is addressed in the following section. ### Coercion If PCEs were to become more commonplace, then employers might start to require their employees to use PCEs. The Academy of Medical Sciences et al. 
([2012](#B1)) suggested in a recent report that “[O]ccupations that require particular patterns of focus could benefit from enhancements that facilitate achieving such patterns. For example, surgeons may need to be able to concentrate for extended periods, whereas other jobs such as air traffic control can require very rapid reactions during periods of relative uniformity. As an extrapolation to this, it is possible that in these high-responsibility occupations enhancement could be seen as a moral obligation, or even demanded by the public.” (p. 38, for a discussion see also Maslen et al., [in press](#B40)). The US Airforce has already approved the use of modafinil by its pilots (Caldwell and Caldwell, [2005](#B12)) and some medical practitioners are beginning to wonder whether enhancement might be required of them in the future (Rose and Curry, [2010](#B48)). Writing in the Journal of Surgical Research, surgeons have suggested that the use of PCEs may come to be required practice. They say, “The prospect of fatigued surgeons taking a prescription drug, such as modafinil, to allow them to operate for longer, and possibly to a higher standard, is perhaps not as far-fetched as some may suggest. This drug has already been trialed in emergency physicians, when performing non-medical-related tasks at the end of a nightshift.” (Warren et al., [2009](#B62), p. 168). Further, the authors note that there are “useful and warranted forms of coercion” (p. 170) such as forcing surgeons to undertake hygiene practices such as handwashing prior to and during surgery. Given that this \*coercion\* is acceptable, they go on to ask, “What will our employers feel about a drug that makes us less prone to error, able to work longer hours, or to operate more efficiently? Employers are able to request certain behavioral standards from their employees, dictate rest periods, and insist on abstinence from certain drugs to ensure that their doctors perform well—will a day arise where they can recommend or even insist on surgeons being artificially enhanced? This may seem fanciful, but recent work has suggested that a mixture of napping and caffeine attenuates fatigue in interns and thus should be adopted by hospital administration. Why not other types of stimulant?” (p. 171). The ethical objection often raised in this context is that, although it is thought to be reasonable to require certain things of employees, such as compulsory training and codes of conduct, requiring them to ingest psychoactive substances into their bodies is too demanding a requirement. It would require a compelling justification (perhaps pointing to the severity of harm that would be prevented through requiring enhancement) to trump the value we place on preserving the right individuals have to determine what happens to their bodies and minds (for discussion of the right to mental self-determination in relation to enhancement and other mental manipulation, see Bublitz and Merkel, [2014](#B10)). As far as possible, this right should be preserved, and this is especially the case where there is not enough evidence about the harms to which an employer would be subjecting his or her employee. Neuroscientific evidence will have a large role to play in understanding the seriousness of any proposed requirement. In addition to the risks posed by individual instances of PCE use, more data on the potential for dependency will be essential for this discussion. 
Whilst we \*might\* think it permissible to require some employees to take small, isolated personal risks, requiring them to do something that results in substance dependency would more comprehensively infringe an individual's autonomy. In this connection, although PCEs may become more common in the workplace, one of us has argued elsewhere that for these and other reasons, it is unlikely that there will ever be a legal obligation for a professional like a surgeon to take a PCE (Goold and Maslen, [2014](#B24)). At present, no employer requires employees to take caffeine. Caffeine is a PCE. Even if people were not directly coerced to take enhancers it could still be objected that permitting PCE use could result in indirect pressure to use them. The perception that others are taking substances that make them more productive could lead to the belief that taking them is necessary to keep up (Academy of Medical Sciences et al., 2012) and not taking PCEs might render one \*de facto\* ineligible for certain jobs (Chatterjee, [2004](#B13)). However, whether indirect pressure to take PCEs would in fact result in their more prevalent use is a question for social science. (For empirical data relating to this question, see Franke et al., [2011](#B22a) and Maier et al., [2013](#B38)). Neuroscientific research will have little to contribute to the debate about the limits of acceptable social pressure and restriction on employees' autonomy. However, as noted above, opposition to enforced PCE use is partly motivated by the current lack of evidence on long-term safety and efficacy. What we can legitimately require of people is closely related to what risks we can require them to take. Assessment of the legitimacy of requiring certain individuals to take PCEs will depend in large part on their medical safety and efficacy. If PCEs are very safe and efficacious, their use in life-saving/threatening professions (e.g., surgeons, politicians, truck drivers, airline pilots, etc.) may legitimately be required. ### Treatment vs. enhancement As noted in the introductory section, there is much disagreement about what should count as enhancement (c.f., Parens, [1998](#B43)). Sometimes this disagreement is framed as a debate about where \*treatment\* ends and \*enhancement\* begins. The distinction often made is that treatments serve to cure illness and preserve health whereas enhancements make people “better than well.” For example, Juengst ([1998](#B29)) defines enhancement as the term “usually used in bioethics to characterize interventions designed to improve human form or functioning beyond what is necessary to sustain or restore good health” (p. 29). However, a common objection to this distinction is that, in many cases, what we define as “healthy” and “normal” is arbitrary. This objection does not deny that there can be clear failures of function or physiology as a result of pathology which most would agree are inimical to good health, such as the effects of a brain hemorrhage or stroke. Rather, it emphasizes that the boundary between healthy and unhealthy cognition in many cases is a matter of where we choose to draw the line, not based on either statistically significant subfunctioning or pathology. For example, delimiting normal from defective powers of concentration when diagnosing ADHD is necessarily to engage in marking a categorical point on what is otherwise a continuum (c.f. Schermer and Bolt, [2011](#B56)). The point could be selected further to the left or right on that continuum of functioning. 
Would selecting a point which increased ADHD diagnosis increase the instances of individuals being treated or would some be receiving enhancement through the back door? Since the point is to some extent arbitrary, the corresponding labels of treatment and enhancement appear less meaningful in this context. Similarly, it is difficult to know whether to classify substances used to combat age-related cognitive decline as instances of treatment or enhancement. Drawing sharp lines could have the result that a young person with cognitive abilities just above the cut off for being classified as having a mental disability would be “enhanced” by a drug but the elderly person whose abilities slipped to a level still above the young person would be receiving “treatment” if given the same substance (for a similar example, see Sandberg, [2011](#B50)). Given the slipperiness of the distinction, one of us has argued (Savulescu et al., [2011](#B53)) that instead of trying to determine whether certain drugs or certain of their effects constitute treatment or enhancement, it is more coherent and useful to think of a continuum of well-being which can be increased or diminished by various interventions. It might be thought that evidence from neuroscience could adjudicate between instances of treatment and enhancement. If substances have discernable, discrete effects on different groups of people, it could be argued that these discrete effects mark the difference between a treatment and an enhancement. For example, although the way modafinil works is still unknown in detail (Minzenberg and Carter, [2008](#B37)), neurologists do know that the brain of the narcoleptic is not neurophysiologically equivalent to the brain of the sleep-deprived individual and, correspondingly, it might be hypothesized that the effects of modafinil on the two groups will differ. Most forms of narcolepsy are associated with a deficiency in the hypothalamic neurotransmitter orexin (Mignot, [2010](#B36)). The average sleep-deprived person, in contrast, does not exhibit such a deficiency. Accordingly, it might be thought that the more differences neuroscience can reveal between the narcoleptic and the non-narcoleptic, the better equipped we will be to distinguish between the treatment and enhancement effects of at least this PCE. However, such knowledge would still not provide a definitive solution to which effects we should refer to as treatment and which we should call enhancement. Modafinil is also prescribed for shift work sleep disorder (SWSD), which is a product of unusual working patterns affecting circadian rhythms, not of underlying neurophysiology (Åkerstedt and Wright, [2009](#B8)). This being said, it should be noted that not everyone who does shift work suffers from SWSD. This suggests that there must be some physiological or psychological difference between sufferers and non-sufferers and our lack of knowledge as to the cause of this difference does not make the disorder less of a treatable disorder. In labeling the prescription of PCEs for SWSD an instance of treatment, a normative or ethical decision is still being made about which conditions and patterns of functionality should attract medical attention and resources. We are also implicitly making an assessment that medical treatment is the just and appropriate course of action for sufferers of the disorder, rather than prioritizing a change away from shift work. 
Neither the individual's underlying neurophysiology nor the particular mechanism of action of the substance tells us anything about whether this decision is the correct one.

One avenue through which neuroscience might illuminate the treatment vs. enhancement debate is by identifying pathology associated with mental or psychiatric disorders or limitations. So far, accurate tissue- or cellular-level pathological classification of psychiatric disease or disorder has eluded researchers. However, if psychiatric disorders could be characterized in the same way as neurological disorders, the presence of pathology would separate conditions which are diseases from those which constitute normal human variation.

Given that PCEs are not universally available through the healthcare system, individuals without conditions for which PCEs are approved would currently have to obtain them through other, unauthorized routes. This means that some people will have access to them but others will not. Even if PCEs were available on an open market, there could still be financial or other barriers to their accessibility. We discuss this issue and its potential implications next.

### Distributive justice

Society-level debates about PCE-related inequality consider \*distributive justice\*, and are related to the question of whether PCEs will exacerbate existing socio-economic inequality. A common argument is that, as with many technologies, the rich and informed will have access to them whilst the poor and uninformed will not (e.g., Fukuyama, [2002](#B22b)). Assuming that cognitive enhancement confers some benefits, this will make those already at an advantage even better off. Whether this would in fact happen would depend on factors such as the affordability and accessibility of PCEs, as well as on the realities of their cognition-improving effects: the affordability and accessibility of PCEs will determine whether people are able to use them; the effects of the substances will determine whether they really put people who do so at an advantage.

However, although there is the potential for PCEs to exacerbate unfairness if their distribution is unregulated, as one of us points out elsewhere, this is not a necessary consequence (Sandberg and Savulescu, [2011](#B51)): if PCEs were distributed according to a principle of justice such as “prioritarianism”—the principle that says that we should give priority to those who are worst off, but also aim to maximize well-being of everyone in society—then PCEs would be most accessible to the worst off, becoming less accessible (but not inaccessible) as need decreases. Further, as we go on to discuss below in relation to competitive fairness, neuroscientific evidence supports the hypothesis that there is a baseline effect of many PCEs: their effects seem to depend on the subject's baseline working memory capacity. Individuals with low working-memory capacity improve while high-span individuals are either not affected or are even impaired (de Jongh et al., [2008](#B15)). This means that those most in need of PCE would benefit most from it, with those less in need not benefiting at all or even experiencing impairment from the same substance. Given this evidence, it has been suggested that enhancement might actually serve to \*reduce\* inequality (Bostrom and Sandberg, [2009](#B8)).
However, whilst this could be true in terms of the equality of cognitive capacity, it must be remembered that cognitive capacity and socio-economic status are not always correlated: there would still be people with more opportunities and resources who could improve their prospects further. Whilst policy decisions about access to PCEs will be principally socio-political matters, those making the decisions will need to know how enhancers affect members of the population in order to best serve the interests of justice and equality. If PCEs have differential effects on those who are already worst off, this will be highly relevant to their permissibility and just distribution. Neuroscience research can thus contribute to ethical debate if effects in targeted and specified populations of ethical significance are studied. This would require ethically relevant population stratification.

### Competitive fairness and cheating

The ethical discussion of whether using cognitive enhancers constitutes \*cheating\*—perhaps in exams or at work—is more nuanced than the simple question of whether taking enhancers is “against the rules.” It can extend beyond considerations of \*fairness in competitive contexts\* to ask whether personal achievements facilitated by PCEs are devalued for this reason (cf. Schermer, [2008](#B55); Goodman, [2010](#B23); Santoni de Sio et al., [in press](#B52)). We suggest that evidence from neuroscience will help to develop the cheating debate in important ways.

Below, we argue that three types of empirical inquiry are relevant to the ethical discussion. The first, the phenomenon of the “inverted U”—according to which the enhancing effects of PCEs are often baseline dependent and exhibit non-linear dose-response curves (de Jongh et al., [2008](#B15); Husain and Mehta, [2011](#B28))—is relevant to efficacy questions involved in debates about cheating. The second type of study relevant to the debate is that which seeks to identify the particular neural systems affected by different substances, leading to disparate effects (e.g., Lanni et al., [2008](#B33)): whether a substance improves creativity or rote learning may matter for some possible conceptions of what constitutes cheating. Similarly, whether a substance improves motivation and task enjoyment vs. memory capacity might matter for those who place a lot of value on success requiring effort. Third, we argue that the neuroscientific evidence pointing to the likelihood of cognitive trade-offs (de Jongh et al., [2008](#B15); Husain and Mehta, [2011](#B28)) adds an underdeveloped dimension to the cheating debate: if the complaint is that achievements facilitated by PCE are devalued because they do not involve enough personal sacrifice, then evidence suggesting that enhancement in some domains comes at the cost of impairments in others offers a challenge to this view.

#### The inverted U curve and baseline dependency

Neuroscientific research so far shows that the effects of many purported PCEs are baseline dependent and have an inverted U-shaped dose-response curve (de Jongh et al., [2008](#B15); Husain and Mehta, [2011](#B28)). This is important to the cheating debate as it means that some individuals will benefit from taking PCEs whereas others will gain no benefit and might even be impaired: low-performing individuals will tend to be on the upward slope of the inverted U and so benefit from a substance that moves them further up this slope.
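To make the shape of this relationship concrete, the following is a purely illustrative sketch of an inverted-U relation; it is not a model taken from the cited studies, and all symbols are hypothetical placeholders:

$$P(b, d) = P_{\max} - k\,(b + d - c^{*})^{2}$$

where $P$ is task performance, $b$ the individual's baseline level of the relevant neurotransmitter activity, $d$ the drug-induced increment, $c^{*}$ the optimal level, and $k > 0$ a scaling constant. On this toy picture, an individual whose baseline lies well below $c^{*}$ gains from a moderate dose, whereas an individual whose baseline already sits near $c^{*}$ is pushed past the optimum by the same dose and performs worse.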
High-performing individuals, on the other hand, will tend to be at the peak of the inverted U and will therefore become impaired by a substance that increases neurotransmitter levels further. If neuroscience were to more precisely identify the neurological profiles of those who are able to benefit from PCEs and those who are not, then ethicists would be able to consider in greater detail whether the prospect of some being able to enhance whilst others cannot counts more decisively against PCE in competitive contexts than if all could enhance in these contexts. They would need to consider whether it is the case that enhancement is only fair if everyone could (in principle) avail themselves of it, or whether it is permissible given that some are physiologically denied the possibility of improving.

#### Disparate effects of different PCEs

Although the exact mechanisms of substances like methylphenidate and modafinil are not yet fully understood, researchers have begun to investigate which PCEs affect which underlying systems, and with which effects (Lanni et al., [2008](#B33); Smith and Farah, [2011](#B57)). Although cognitive functions necessarily interact, attempts have been made to ascertain the primary cognitive functions improved by particular PCEs based on their effects on neurotransmitters. Husain and Mehta ([2011](#B28)) explain that “a simple mapping between a specific neurotransmitter and a particular cognitive function—such as [working memory]—[…] seems untenable. However, subtle but important differences in the precise processes modulated might provide some discriminating value: for instance, dopamine has an established role in reinforcement learning in response to rewards, whereas serotonin seems to modulate reinforcement learning for aversive stimuli.” (p. 29). Pursuing such discrimination, Lanni et al. ([2008](#B33)) review the neuroscience literature investigating the neuronal circuits, neurotransmitters and molecular events underlying the cognitive domains of memory, attention, and creativity to distinguish the effects of different enhancement substances. Elsewhere, Smith and Farah ([2011](#B57)) review the cognitive neuroscience literature to examine whether (and which) prescription stimulants improve learning, working memory, cognitive control, and other executive functions.

If neuroscientific research were able to distinguish between the effects of different PCEs, this could have some implications for discussions about cheating. This is, again, effect stratification. Combined with population stratification, neuroscience research could bring us closer to understanding what effect this particular PCE will have in this person. This reflects the move to “personalized medicine” and might be dubbed “personalized enhancement.” Only when we can predict the \*personal\* benefits and costs of enhancement can policy be truly informed and ethical.

It might be thought that the enhancement of some cognitive functions is more unfair than the enhancement of others. For example, the enhancement of creative thinking might be thought to constitute more significant cheating than improving wakefulness or even memory capacity.
Imagine someone who says “when I take enhancers my work is no better, I can just do more of the same for longer” vs. someone who says “when I take enhancers my work is much better than I can do without them.” This distinction has links with the debate about authenticity: it is as if the former individual is enabled to make better use of his or her own cognitive resources, whereas the latter is given new cognitive resources upon which he or she can draw. Those who think PCE use is unfair because the achievement is not a reflection of the person's natural abilities to solve and create might be less concerned by a PCE that simply allowed more efficient work of the standard the person could naturally achieve. A PCE that promoted wakefulness might allow an individual to work for longer, but it will not come up with ideas on his or her behalf. Of course, it is important to remember that a PCE that improved creativity still has its effects on and through the individual's own brain. What will be interesting for ethicists to discuss is whether “assistance” with time management and efficiency is relevantly different to “assistance” with the content of ideas (if, indeed, we want to characterize the respective effects in this way).

Practical consequences might be to consider certain substances unfair for certain types of tests or for entry into certain types of employment: employers might only be troubled by the use of PCEs whose effects are \*necessary\* to carry out the job. This would be a practical consideration: could the employee continue to work without the PCE? For example, an architect who could only perform satisfactorily when taking a substance like modafinil that seems to improve spatial planning and visual pattern recognition memory (Turner et al., [2003](#B61)) might be thought to be a higher-risk employee than one who uses a memory enhancer which enables him or her to remember the names of building materials that he or she could look up without problem in the absence of the substance.

Further, neuroscientific research that could distinguish substances that enhance the \*effectiveness\* of cognitive capacities, such as working memory, from those that instead (or additionally) increase \*motivation\* could also have implications for the competitive fairness debate. In the ethical literature, the point is sometimes made that it is effort and striving that makes achievements intelligible and valuable. For example, Fox ([2005](#B22c)) argues that “[b]ecause they act directly on the human body and mind, biotechnological enhancements tempt us to shirk individual striving and struggle” (p. 1150). A common rebuttal to this type of argument is that, whilst PCEs can make efforts more effective, they do not replace the need for dedicated, sustained study—striving and struggle are still required in order to achieve. For example, Greely ([2010](#B25a)) notes that “the more plausible cognitive enhancements would not eliminate the need to study; they would just make studying more effective” (p. 6). If, however, there were a significant enough effect of a PCE on motivation and/or task enjoyment, then it would be open to ethicists to argue that this \*does\* in some sense reduce the amount of effort that the person puts in. The drive to work or achieve no longer emanates from the individual and no struggle is encountered. On the motivating effects of prescription stimulants, Smith and Farah ([2011](#B57)) write: “Another empirical question concerns the effects of stimulants on motivation, which can affect academic and occupational performance independent of cognitive ability.
Volkow et al. ([2004](#B61a)) showed that [methylphenidate] increased participants' self-rated interest in a relatively dull mathematical task. This is consistent with student reports that prescription stimulants make schoolwork seem more interesting (e.g., DeSantis et al., [2008](#B15a)). To what extent are the motivational effects of prescription stimulants distinct from their cognitive effects, and to what extent might they be more robust to differences in individual traits, dosage and task? Are the motivational effects of stimulants responsible for their usefulness when taken by normal healthy individuals for cognitive enhancement?” (p. 735). If particular PCEs were shown to significantly improve motivation and/or task enjoyment whilst others only improve effectiveness, ethicists would need to consider whether there is any relevant difference between enhancing motivation and enhancing effectiveness and, if so, what the implications would be for the value of resulting achievements.

#### Enhancement is likely to involve trade-offs

Research suggests that enhancing one domain of cognition might come at the cost of impairing another. de Jongh et al. ([2008](#B15)) review evidence suggesting trade-offs between long-term memory and working memory; between stability and flexibility of long-term memory; between stability and flexibility of working memory; and perhaps, they conjecture, between cognition and mood. If a PCE comes at a cost—and, especially, a mental cost—this could also add a new dimension to the debate about cheating and the value of achievements. In terms of gaining an unfair advantage over others in exams and other competitive tasks, the trade-offs would be relevant if the test required exercise of \*both\* the enhanced and the impaired capacity. Whilst the individual gains some advantage in some parts of the test, he or she would be disadvantaged in other parts.

More generally, neuroscientific evidence of trade-offs is interesting to the debate about fairness and the value of achievements because some of the objections rest heavily on the idea that using PCEs means that no sacrifice—usually conceived as sacrifice of time, energy or other opportunities—is made by the individual. For example, Kass ([2003](#B32)) says: “Yet in those areas of human life in which excellence has until now been achieved only by discipline and effort, the attainment of those achievements by means of drugs, genetic engineering, or implanted devices looks to be “cheating” or “cheap.” We believe—or until only yesterday believed—that people should work hard for their achievements. “Nothing good comes easily.”” (p. 21).

If enhancement of one domain of cognition comes at the cost of another, then it does seem that some sort of sacrifice has been made. We might conceive of an individual who chooses to enhance his or her working memory such that he or she can solve complicated puzzles quickly. This same individual might accept that this enhancement comes at the cost of him or her finding it harder to recall facts and experiences from longer ago. Accordingly, whilst the physical act of ingesting a substance might be easy, there is a sense in which the enhanced capacity did not come easily—it did not come without personal cost. Whilst the conceptually most interesting trade-offs will involve impairments to cognitive capacities—like for like—it should also be noted that the more general side effects of PCEs (discussed in relation to medical safety above) also constitute an additional sort of “cost” to enhancement.
The evidence on medical safety reviewed in the section on Medical Safety and Effectiveness suggests that PCE use will always come at a cost and may involve multiple costs of different kinds. The number and nature of these unavoidable costs constitute a further challenge to the view that achievements facilitated by enhancement involve no sacrifice. It is important to note that these costs of a trade-off are not like financial costs, which can be trivial and will constitute diminishment only insofar as they prevent the individual from making other purchases important to him or her. Rather, the costs of an enhancement trade-off are often mental costs—like for like—and are of a kind much more likely to constitute diminishment. Thus, neuroscientific research poses questions for those engaged in the cheating debate about whether there are relevant differences between the various costs of achievement—effort, opportunity, physiological side effects, cognitive trade-offs—and which (if any) are required for achievements to involve a sufficient level of sacrifice.

Conclusion
----------

We have reviewed six of the main issues debated by ethicists working on PCE. Often, their purpose in debating these issues is to clarify concepts and normative positions, which then serve as a basis for recommending how society—and especially those tasked with its regulation—should respond to the emergence of PCEs. We have argued that whilst some of these issues are mostly political (coercion) or metaphysical (what constitutes authenticity), others have much to gain from emerging neuroscientific research. As well as providing data on safety and effectiveness, neuroscience will also allow a more fine-grained debate about whether the effects of some PCEs are more unfair than others in competitive contexts and whether employers should be more wary of employee reliance on some PCEs than on others. Further, due to emerging evidence on trade-offs, those who object to PCE on the ground that it facilitates individual gain without any attendant pain will have to explain why accepting an associated impairment in exchange for an enhancement is not a relevant sacrifice. Although we anticipate that ethicists will be far from stumped by this challenge, we hope to have demonstrated that it will, in large part, be through responding to emerging scientific evidence that normative accounts become more refined, complete, and practically relevant.

In general, neuroscience can contribute to the formation of ethical policy on PCEs by adopting a “personalized” approach: personalized enhancement. Fine-grained and stratified research should seek to identify specific risks, benefits, and trade-offs in small, ethically relevant populations, or ideally in individuals. In doing this, according to the ethical values, principles, and criteria we choose, we can form policy on who should access which PCEs in which ways.

### Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments
---------------

This work was supported by the Wellcome Trust [086041/Z/08/Z]; the Oxford Martin School; and the Uehiro Foundation on Ethics and Education.

References
----------

\* Academy of Medical Sciences, Royal Society, British Academy, Royal Academy of Engineering. (2012). Human Enhancement and the Future of Work (Report from Joint Workshop). Available online at: (Accessed 22 May 2013).
\* Advokat C. (2010). What are the cognitive effects of stimulant medication? Emphasis on adults with attention-deficit/hyperactivity disorder. Neurosci. Biobehav. Rev. 34, 1256–1266. doi: 10.1016/j.neubiorev.2010.03.006
\* Beglinger L. J., Gaydos B. L., Kareken D. A., Tangphao-Daniels O., Siemers E. R., Mohs R. C. (2004). Neuropsychological test performance in healthy volunteers before and after donepezil administration. J. Psychopharmacol. 18, 102–108. doi: 10.1177/0269881104040248
\* Beglinger L. J., Tangphao-Daniels O., Kareken D. A., Zhang L., Mohs R., Siemers E. R. (2005). Neuropsychological test performance in healthy elderly volunteers before and after donepezil administration: a randomized, controlled study. J. Clin. Psychopharmacol. 25, 159–165. doi: 10.1097/01.jcp.0000155822.51962.b4
\* Bolt I., Schermer M. (2009). Psychopharmacological enhancers: enhancing identity? Neuroethics 2, 103–111. doi: 10.1007/s12152-008-9031-7
\* Boot B. P., Partridge B., Hall W. (2012). Letter to the editor: better evidence for safety and efficacy is needed before neurologists prescribe drugs for neuroenhancement to healthy people. Neurocase 18, 181–184. doi: 10.1080/13554794.2011.588174
\* Bostrom N., Sandberg A. (2009). Cognitive enhancement: methods, ethics, regulatory challenges. Sci. Eng. Ethics 15, 311–341. doi: 10.1007/s11948-009-9142-5
\* Bublitz J. C., Merkel R. (2009). Autonomy and authenticity of enhanced personality traits. Bioethics 23, 360–374. doi: 10.1111/j.1467-8519.2009.01725.x
\* Bublitz J. C., Merkel R. (2014). Crimes against minds: on mental manipulations, harms and a human right to mental self-determination. Crim. Law and Philos. 8, 51–77. doi: 10.1007/s11572-012-9172-y
\* Butler A. C., Chapman J. E., Forman E. M., Beck A. T. (2006). The empirical status of cognitive-behavioral therapy: a review of meta-analyses. Clin. Psychol. Rev. 26, 17–31. doi: 10.1016/j.cpr.2005.07.003
\* Caldwell J. A., Caldwell J. L. (2005). Fatigue in military aviation: an overview of US military-approved pharmacological countermeasures. Aviat. Space Environ. Med. 76(7 Suppl.), C39–C51.
\* Chatterjee A. (2004). Cosmetic neurology: the controversy over enhancing movement, mentation, and mood. Neurology 63, 968–974. doi: 10.1212/01.WNL.0000138438.88589.7C
\* DeGrazia D. (2005). Human Identity and Bioethics. Cambridge: Cambridge University Press.
\* de Jongh R., Bolt I., Schermer M., Olivier B. (2008). Botox for the brain: enhancement of cognition, mood and pro-social behavior and blunting of unwanted memories. Neurosci. Biobehav. Rev. 32, 760–776. doi: 10.1016/j.neubiorev.2007.12.001
\* DeSantis A. D., Webb E. M., Noar S. M. (2008). Illicit use of prescription ADHD medications on a college campus: a multimethodological approach. J. Am. Coll. Health 57, 315–324. doi: 10.3200/JACH.57.3.315-324
\* Drabiak-Syed K. (2011). Reining in the pharmacological enhancement train: we should remain vigilant about regulatory standards for prescribing controlled substances. J. Law Med. Ethics 39, 272–279. doi: 10.1111/j.1748-720X.2011.00596.x
\* Dresler M., Sandberg A., Ohla K., Bublitz C., Trenado C., Mroczko-Wasowicz A., et al. (2012). Non-pharmacological cognitive enhancement. Neuropharmacology 64, 529–543. doi: 10.1016/j.neuropharm.2012.07.002
\* Elliott C. (1999). A Philosophical Disease: Bioethics, Culture and Identity. Psychology Press.
\* Elliott R., Sahakian B. J., Matthews K., Bannerjea A., Rimmer J., Robbins T. W. (1997). Effects of methylphenidate on spatial working memory and planning in healthy young adults. Psychopharmacology 131, 196–206. doi: 10.1007/s002130050284
\* European Medicines Agency. (2010). Questions and Answers on the Review of Medicines Containing Modafinil. EMA/CHMP/460496/2010. Available online at:
\* Farah M. J., Smith M. E., Ilieva I., Hamilton R. H. (2014). Cognitive enhancement. Wiley Interdiscipl. Rev. Cogn. Sci. 5, 95–103. doi: 10.1002/wcs.1250
\* Faulmüller N., Maslen H., Santoni de Sio F. (2013). The indirect psychological costs of cognitive enhancement. Am. J. Bioeth. 13, 45–47. doi: 10.1080/15265161.2013.794880
\* Fox D. (2005). Safety, Efficacy, and Authenticity: The Gap between Ethics and Law in FDA Decisionmaking. Available online at:
\* Franke A. G., Bonertz C., Christmann M., Huss M., Fellgiebel A., Hildt E., et al. (2011). Non-medical use of prescription stimulants and illicit use of stimulants for cognitive enhancement in pupils and students in Germany. Pharmacopsychiatry 44, 60–66. doi: 10.1055/s-0030-1268417
\* Fukuyama F. (2002). Our Posthuman Future: Consequences of the Biotechnology Revolution. New York, NY: Farrar, Straus and Giroux.
\* Goodman R. (2010). Cognitive enhancement, cheating, and accomplishment. Kennedy Inst. Ethics J. 20, 145–160. doi: 10.1353/ken.0.0309
\* Goold I., Maslen H. (2014). Must the surgeon take the pill? Negligence duty in the context of cognitive enhancement. Mod. Law Rev. 77, 60–86. doi: 10.1111/1468-2230.12056
\* Greely H., Sahakian B., Harris J., Kessler R. C., Gazzaniga M., Campbell P., et al. (2008). Towards responsible use of cognitive-enhancing drugs by the healthy. Nature 456, 702–705. doi: 10.1038/456702a
\* Greely H. T. (2010). Enhancing brains: what are we afraid of? in Cerebrum: The Dana Forum on Brain Science, Vol. 2010 (Dana Foundation). Available online at:
\* Gron G., Kirstein M., Thielscher A., Riepe M. W., Spitzer M. (2005). Cholinergic enhancement of episodic memory in healthy young adults. Psychopharmacology 182, 170–179. doi: 10.1007/s00213-005-0043-2
\* Hall W. D., Lucke J. C. (2010). Enhancement uses of neuropharmaceuticals: more caution and skepticism needed. Addiction 105, 2041–2043. doi: 10.1111/j.1360-0443.2010.03211.x
\* Husain M., Mehta M. A. (2011). Cognitive enhancement by drugs in health and disease. Trends Cogn. Sci. 15, 28–36. doi: 10.1016/j.tics.2010.11.002
\* Juengst E. T. (1998). What does enhancement mean? in Enhancing Human Traits: Ethical and Social Implications, ed Parens E. (Georgetown University Press), 29–47.
\* Juth N. (2011). Enhancement, autonomy, and authenticity, in Enhancing Human Capacities, eds Savulescu J., ter Meulen R., Kahane G. (Oxford: Wiley-Blackwell), 34–48.
\* Kahane G., Savulescu J. (2013). Normal human variation: refocussing the enhancement debate. Bioethics. [Epub ahead of print]. doi: 10.1111/bioe.12045
\* Kass L. (2003). Ageless bodies, happy souls. New Atlantis 1, 9–28.
\* Kroutil L. A., Van Brunt D. L., Herman-Stahl M. A., Heller D. C., Bray R. M., Penne M. A. (2006). Nonmedical use of prescription stimulants in the United States. Drug Alcohol Depend. 84, 135–143. doi: 10.1016/j.drugalcdep.2005.12.011
\* Lanni C., Lenzken S. C., Pascale A., Del Vecchio I., Racchi M., Pistoia F., et al. (2008). Cognition enhancers between treating and doping the mind. Pharmacol. Res. 57, 196–213. doi: 10.1016/j.phrs.2008.02.004
\* Maher B. (2008). Poll results: look who's doping. Nature 452, 674–675. doi: 10.1038/452674a
\* Mehta M. A., Owen A. M., Sahakian B. J., Mavaddat N., Pickard J. D., Robbins T. W. (2000). Methylphenidate enhances working memory by modulating discrete frontal and parietal lobe regions in the human brain. J. Neurosci. 20:RC65.
\* Mignot E. (2010). Narcolepsy: genetic predisposition and pathophysiology, in Narcolepsy: A Clinical Guide, eds Goswami M., Pandi-Perumal S. R., Thorpy M. J. (New York, NY: Springer), 3–21. doi: 10.1007/978-1-4419-0854-4_1
\* Minzenberg M. J., Carter C. S. (2008). Modafinil: a review of neurochemical actions and effects on cognition. Neuropsychopharmacology 33, 1477–1502. doi: 10.1038/sj.npp.1301534
\* Maier L. J., Liechti M. E., Herzig F., Schaub M. P. (2013). To dope or not to dope: neuroenhancement with prescription drugs and drugs of abuse among Swiss university students. PLoS ONE 8:e77967. doi: 10.1371/journal.pone.0077967
\* Maslen H., Douglas T., Cohen Kadosh R., Levy N., Savulescu J. (2014). The regulation of cognitive enhancement devices: extending the medical model. J. Law Biosci. 1, 68–93. doi: 10.1093/jlb/lst003
\* Maslen H., Santoni de Sio F., Faulmüller N. (in press). With cognitive enhancement comes great responsibility? in Responsible Innovation, Vol. 2, eds Koops B. J., et al. (Dordrecht: Springer).
\* Müller U., Steffenhagen N., Regenthal R., Bublak P. (2004). Effects of modafinil on working memory processes in humans. Psychopharmacology 177, 161–169. doi: 10.1007/s00213-004-1926-3
\* Outram S. M. (2010). The use of methylphenidate among students: the future of enhancement? J. Med. Ethics 36, 198–202. doi: 10.1136/jme.2009.034421
\* Parens E. (1998). Is better always good? The enhancement project. Hastings Center Rep. 28, s1–s17. doi: 10.2307/3527981
\* President's Council on Bioethics. (2003). Beyond Therapy. Washington, DC: U.S. Government Printing Office, 253.
\* Ragan C. I., Bard I., Singh I. (2013). What should we do about student use of cognitive enhancers? An analysis of current evidence. Neuropharmacology 64, 588–595. doi: 10.1016/j.neuropharm.2012.06.016
\* Repantis D. (2013). Psychopharmacological neuroenhancement: evidence on safety and efficacy, in Cognitive Enhancement, eds Hildt E., Franke A. G. (Dordrecht: Springer), 29–38.
\* Repantis D., Schlattmann P., Laisney O., Heuser I. (2010). Modafinil and methylphenidate for neuroenhancement in healthy individuals: a systematic review. Pharmacol. Res. 62, 187–206. doi: 10.1016/j.phrs.2010.04.002
\* Rose S., Curry T. (2010). Fatigue countermeasures, and performance enhancement in resident physicians—reply. Mayo Clin. Proc. 85, 301–302. doi: 10.4065/mcp.2009.0704
\* Sabin J. E., Daniels N. (1994). Determining “medical necessity” in mental health practice. Hastings Center Rep. 24, 5–13. doi: 10.2307/3563458
\* Sandberg A. (2011). Cognition enhancement: upgrading the brain, in Enhancing Human Capacities, eds Savulescu J., ter Meulen R., Kahane G. (Oxford: Wiley-Blackwell), 71–91.
\* Sandberg A., Savulescu J. (2011). The social and economic impacts of cognitive enhancements, in Enhancing Human Capacities, eds Savulescu J., ter Meulen R., Kahane G. (Oxford: Wiley-Blackwell), 93–112.
\* Santoni de Sio F., Faulmüller N., Savulescu J., Vincent N. A. (in press). Why less praise for enhanced performance? Moving beyond responsibility-shifting, authenticity, and cheating to a nature of activities approach, in Cognitive Enhancement: Ethical and Policy Implications in International Perspectives, eds Jotterand F., Dubljevic V. (Oxford: Oxford University Press).
\* Savulescu J., Sandberg A., Kahane G. (2011). Well-being and enhancement, in Enhancing Human Capacities, eds Savulescu J., ter Meulen R., Kahane G. (Oxford: Wiley-Blackwell), 3–18.
\* Schelle K. J., Faulmüller N., Caviola L., Hewstone M. (2014). Attitudes towards pharmacological cognitive enhancement – a review. Front. Syst. Neurosci. 8:53. doi: 10.3389/fnsys.2014.00053
\* Schermer M. (2008). Enhancements, easy shortcuts, and the richness of human activities. Bioethics 22, 355–363. doi: 10.1111/j.1467-8519.2008.00657.x
\* Schermer M., Bolt I. (2011). What's in a name? ADHD and the gray area between treatment and enhancement, in Enhancing Human Capacities, eds Savulescu J., ter Meulen R., Kahane G. (Oxford: Wiley-Blackwell), 179–193.
\* Singh I. (2005). Will the “real boy” please behave: dosing dilemmas for parents of boys with ADHD. Am. J. Bioeth. 5, 34–47. doi: 10.1080/15265160590945129
\* Smith M. E., Farah M. J. (2011). Are prescription stimulants “smart pills”? The epidemiology and cognitive neuroscience of prescription stimulant use by normal healthy individuals. Psychol. Bull. 137, 717–741. doi: 10.1037/a0023825
\* Taylor C. (1991). The Ethics of Authenticity. Cambridge, MA: Harvard University Press.
\* Thomas R. J., Kwong K. (2006). Modafinil activates cortical and subcortical sites in the sleep-deprived state. Sleep 29, 1471–1481.
\* Tricco A. C., Soobiah C., Berliner S., Ho J. M., Ng C. H., Ashoor H. M., et al. (2013). Efficacy and safety of cognitive enhancers for patients with mild cognitive impairment: a systematic review and meta-analysis. Can. Med. Assoc. J. 185, 1393–1401. doi: 10.1503/cmaj.130451
\* Turner D. C., Robbins T. W., Clark L., Aron A. R., Dowson J., Sahakian B. J. (2003). Cognitive enhancing effects of modafinil in healthy volunteers. Psychopharmacology 165, 260–269. doi: 10.1007/s00213-002-1250-8
\* Volkow N. D., Wang G. J., Fowler J. S., Telang F., Maynard L., Logan J., et al. (2004). Evidence that methylphenidate enhances the saliency of a mathematical task by increasing dopamine in the human brain. Am. J. Psychiatry 161, 1173–1180. doi: 10.1176/appi.ajp.161.7.1173
\* Warren O. J., Leff D. R., Athanasiou T., Kennard C., Darzi A. (2009). The neurocognitive enhancement of surgeons: an ethical perspective. J. Surg. Res. 152, 167–172. doi: 10.1016/j.jss.2007.12.761
\* Wesensten N. J., Killgore W. D., Balkin T. J. (2005). Performance and alertness effects of caffeine, dextroamphetamine, and modafinil during sleep deprivation. J. Sleep Res. 14, 255–266. doi: 10.1111/j.1365-2869.2005.00468.x
\* White B. P., Becker-Blease K. A., Grace-Bishop K. (2006). Stimulant medication use, misuse, and abuse in an undergraduate and graduate student sample. J. Am. Coll. Health 54, 261–268. doi: 10.3200/JACH.54.5.261-268
\* Yesavage J. A., Mumenthaler M. S., Taylor J. L., Friedman L., O'Hara R., Sheikh J., et al. (2002). Donepezil and flight simulator performance: effects on retention of complex skills. Neurology 59, 123–125. doi: 10.1212/WNL.59.1.123

---

Articles from Frontiers in Systems Neuroscience are provided here courtesy of \*\*Frontiers Media SA\*\*

---
be379912-b3c7-4900-b045-e7833223410c
trentmkelly/LessWrong-43k
LessWrong
Meetup : Pittsburgh: Value of Information Discussion article for the meetup : Pittsburgh: Value of Information WHEN: 13 November 2012 06:00:00PM (-0800) WHERE: EatUnique, S Craig St, Pittsburgh Phone 412-304-6258 if you can't find us
095d32c4-137b-4b17-8254-ac414bb815f9
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven) The Omega CorporationInternal MemorandumTo: Omega, CEOFrom: Gamma, Vice President, Hedonic Maximization Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that has been determined by rigorously tested algorithms to be the most enjoyable life they could ever experience. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection. Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers. Extensive market research has succeeded only at baffling our researchers. People have even refused free trials of the device. Our researchers explained to them in perfectly clear terms that their current position is misinformed, and that once they tried the MBLS, they would never want to return to their own lives again. Several survey takers went so far as to specify that statement as their reason for refusing the free trial! They know that the MBLS will make their life so much better that they won't want to live without it, and they refuse to try it for that reason! Some cited their "utility" and claimed that they valued "reality" and "actually accomplishing something" over "mere hedonic experience." Somehow these organisms are incapable of comprehending that, inside the MBLS simulator, they will be able to experience the feeling of actually accomplishing feats far greater than they could ever accomplish in real life. Frankly, it's remarkable such people amassed enough credits to be able to afford our products in the first place! You may recall that a Beta version had an off switch, enabling users to deactivate the simulation after a specified amount of time, or could be terminated externally with an appropriate code. These features received somewhat positive reviews from early focus groups, but were ultimately eliminated. No agent could reasonably want a device that could allow for the interruption of its perfect life. Accounting has suggested we respond to slack demand by releasing the earlier version at a discount; we await your input on this idea. Profits aside, the greater good is at stake here. We feel that we should find every customer with sufficient credit to purchase this device, forcibly install them in it, and bill their accounts. They will immediately forget our coercion, and they will be many, many times happier. To do anything less than this seems criminal. Indeed, our ethics department is currently determining if we can justify delaying putting such a plan into action. Again, your input would be invaluable. I can't help but worry there's something we're just not getting." The above is a chunk of text from a post created by a user on the website LessWrong. What is this text talking about? 
The text is a hypothetical internal memorandum from a company called the Omega Corporation. The company has developed a product called the Much-Better-Life Simulator (MBLS), which is essentially a device that allows users to live in a simulated reality that is algorithmically designed to be the most enjoyable life they could ever experience. This simulated life is so perfect that it surpasses any real life experiences. The device also erases all memories of being placed in the simulator, creating a seamless transition into this perfect life. However, the company is facing a problem. Despite the perfection offered by the MBLS, the product is not selling as expected. Many customers are refusing to buy it, even when offered free trials. The customers' reasons for refusal are baffling to the company. Some customers value their real-life experiences and accomplishments over the simulated perfection offered by the MBLS. Others refuse to try it because they fear they will become so enamored with the simulated life that they won't want to return to their real lives. The company is considering forcibly installing customers into the MBLS and billing their accounts, arguing that the customers will be happier in the simulated reality and will forget any coercion. They are also contemplating re-releasing an earlier version of the product that had an off switch, allowing users to deactivate the simulation after a specified amount of time. Can you explore the ethical implications of the Omega Corporation's consideration to forcibly install customers into the MBLS and bill their accounts? Absolutely, this consideration raises a number of ethical concerns. 1. Consent: The cornerstone of any ethical interaction, particularly those involving significant life decisions, is informed consent. The company's plan to forcibly install customers into the MBLS violates this principle. Even if the company believes that the customers will be happier in the simulated reality, it is not their decision to make. Each individual has the right to make their own choices about their life and experiences. 2. Autonomy: Closely related to consent is the principle of autonomy, the right of an individual to control their own body and life. Forcibly installing customers into the MBLS infringes on their autonomy, as it removes their ability to make decisions about their own experiences and lives. 3. Deception: The plan involves erasing all memories of being placed in the simulator, which is a form of deception. This raises ethical questions about the right to truth and transparency. Deception can lead to a loss of trust and can harm relationships, both personal and professional. 4. Financial exploitation: Billing customers' accounts without their explicit agreement can be seen as a form of financial exploitation. This is particularly concerning given that the customers may not even remember agreeing to the charges due to the memory erasure aspect of the MBLS. 5. Potential harm: While the company believes that life in the MBLS is superior and that customers will be happier, they cannot guarantee this outcome for every individual. People have different values, goals, and sources of happiness. Some individuals may find more satisfaction in overcoming real-life challenges than in experiencing simulated perfection. Forcing such individuals into the MBLS could cause them psychological harm. 
In conclusion, while the company's intentions may be to increase happiness, their proposed method of achieving this goal raises significant ethical concerns. It is crucial to respect individual autonomy, consent, and the right to truth, even when one believes that the proposed action will lead to a better outcome. Explain the concept of "informed consent" in greater detail, highlighting its importance in medical and psychological research. Informed consent is a fundamental ethical principle in medical and psychological research involving human participants. It is based on the premise that individuals have the right to make decisions about their own bodies and health, including the right to decide whether or not to participate in research. Informed consent is not just a one-time event, but a process that continues throughout the participant's involvement in the research. The process of obtaining informed consent involves three key components: 1. Disclosure: Researchers must provide potential participants with all relevant information about the research. This includes the purpose of the research, what participation will involve, the expected duration of participation, any potential risks or benefits, any alternatives to participation, and the participant's rights, including the right to withdraw from the research at any time without penalty. 2. Comprehension: It is not enough to simply provide information; researchers must also ensure that potential participants understand the information. This may involve explaining complex concepts in layman's terms, providing written information for participants to review at their own pace, or checking comprehension through questions or other means. 3. Voluntariness: Participants must voluntarily agree to participate, without coercion or undue influence. This means that researchers must be careful not to pressure individuals into participating, and must ensure that participants understand that they can withdraw at any time without negative consequences. Informed consent is crucial for several reasons. Firstly, it respects individual autonomy and the right to self-determination. Secondly, it promotes trust in the research process, which is crucial for the recruitment and retention of research participants. Thirdly, it helps to protect individuals from harm, by ensuring that they are aware of any potential risks before they agree to participate. Lastly, it is a legal requirement in many jurisdictions, and is considered a fundamental ethical standard in research.
f06c77e5-0973-4ced-b073-98c03d54c2ce
trentmkelly/LessWrong-43k
LessWrong
Open Thread, Jun. 15 - Jun. 21, 2015 If it's worth saying, but not worth its own post (even in Discussion), then it goes here. ---------------------------------------- Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.) 3. Open Threads should be posted in Discussion, and not Main. 4. Open Threads should start on Monday, and end on Sunday.
b31c93f9-8514-47ef-af49-4cb988536c4f
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post3756 A putative new idea for AI control; index here . This post will look at some of the properties of quantilizers , when they succeed and how they might fail. Roughly speaking, let f be some true objective function that we want to maximise. We haven't been able to specify it fully, so we have instead a proxy function g . There is a cost function c = f − g which measures how much g falls short of f . Then a quantilizer will choose actions (or policies) randomly from the top n % of actions available, ranking those actions according to g . It is plausible that for standard actions or policies, g and f are pretty similar. But when we push to maximising g , then the tiny details where g and f differ will balloon, and the cost can grow very large indeed. This could be illustrated roughly by figure I, where g and c are plotted against each other; imagine that c is on a log scale. The blue areas are possible actions that can be taken. Note a large bunch of actions that are not particularly good for g but have low cost, a thinner tail of more optimised actions that have higher g and still have low cost, and a much thinner tail that has even higher g but high cost. The g -maximising actions with maximal cost are represented by the red star. Figure I thus shows a situation ripe for some form of quantilization. But consider figure II: In figure II, the only way to get high g is to have a high c . The situation is completely unsuited for quantilization: any g maximiser, even a quantilizer, will score terribly under f . But that means mainly that we have chosen a terrible g . Now, back to figure I, where quantilization might work, at least in principle. The ideal would be situation Ia; here blue represents actions below the top n % cut-off, green those above (which include the edge-case red-star actions, as before): Here the top n % of actions all score a good value under g , and yet most of them have low cost. But even within the broad strokes of figure I, quantilization can fail. Figure Ib shows a first type of failure: Here the problem is that the quantilizer lets in too many mediocre actions, so the expectation of g (and f ) is mediocre; with a smaller n % , the quantilizer would be better. Another failure mode is figure Ic: Here the n % is too low: all the quantilized solutions have high cost. Another quantilizer design An idea I had some time ago was that, instead of taking the top n % of the actions, the quantilizer instead chooses among the actions that are within n % of the top g -maximising actions. Such a design would be less likely to encounter situations like Ib, but more likely to face situations like Ic. What can be done? So, what can be done to improve quantilizers? I'll be posting some thoughts as they develop, but there are two ideas that spring to mind immediately. First of all, we can use CUA oracles to investigate the shape of the space of actions, at least from the perspective of g ( c , like f , cannot be calculated explicitly). Secondly, there's an idea that I had around low-impact AIs . Basically, it was to ensure that there was some action the AI could take that could easily reach some approximation of its goal. For instance, have a utility function that encourages the AI to build one paperclip, and cap that utility at one. Then scatter around some basic machinery to melt steel, stretch it, give the AI some manipulator arms, etc... The idea is to ensure there is at least one safe policy that gives the AI some high expected utility. 
Then if there is one policy, there's probably a large number of similar policies in its vicinity, safe policies with high expectation. Then it seems that quantilization should work, probably best in its 'within n % of the maximal policy' version (working well because we know the cap of the utility function, hence have a cap on the maximal policy). Now, how do we know that a safe policy exists? We have to rely on human predictive abilities, which can be flawed. But the reason we're reasonably confident in this scenario is that we believe that we could figure out how to build a paperclip, given the stuff the AI has lying around. And the AI would presumably do better than us.
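As a rough illustration of the sampling rule this post builds on, here is a minimal Python sketch of a quantilizer over a finite action set; the utility numbers, the 5% cut-off, and the shape of the hidden cost are hypothetical choices made only to mirror the figure-I picture of a thin, high-cost tail of g-maximising actions.

```python
import random

def quantilize(actions, proxy_utility, top_fraction=0.05, rng=None):
    """Sample uniformly from the top `top_fraction` of actions, ranked by the proxy g."""
    rng = rng or random.Random(0)
    ranked = sorted(actions, key=proxy_utility, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return rng.choice(ranked[:cutoff])

# Hypothetical toy setup: ordinary actions are benign, but the handful of actions
# that look best under the proxy g carry a large hidden cost, so the true
# objective f falls far short of g exactly where g is maximised.
actions = list(range(1000))
g = lambda a: a                                    # proxy utility
hidden_cost = lambda a: 200.0 if a >= 990 else 0.0
f = lambda a: g(a) - hidden_cost(a)                # true objective

maximiser = max(actions, key=g)
quantilised = quantilize(actions, g, top_fraction=0.05)
print("g-maximiser:", "g =", g(maximiser), "f =", f(maximiser))
print("quantilizer:", "g =", g(quantilised), "f =", f(quantilised))
```

With these numbers the g-maximiser always lands in the costly tail, while the quantilizer usually does not, which is closer to the situation of figure Ia than Ic.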
3d05f859-551c-4419-97f9-fa4026f01101
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla *Cross-posting a paper from the Google DeepMind mech interp team, by: Tom Lieberum, Matthew Rahtz, János Kramár, Neel Nanda, Geoffrey Irving, Rohin Shah, Vladimir Mikulik* ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Av3frxNy3y3i2kpaa/x1tujeuzgw5w1zjz93hl) Informal TLDR ------------- * We tried standard mech interp techniques (direct logit attribution, activation patching, and staring at attention patterns) on an algorithmic circuit in Chinchilla (70B) for converting the knowledge of a multiple choice question's answer into outputting the correct letter. + These techniques basically continued to work, and nothing fundamentally broke at scale (though it was a massive infra pain!). * We then tried to dig further into the semantics of the circuit - going beyond "these specific heads and layers matter and most don't" and trying to understand the learned algorithm, and which features were implemented + This *kind of* tracked the feature "this is the nth item in the list" but was pretty messy. + However, my personal guess is that this stuff is just pretty messy at all scales, and we can productively study how clean/messy this stuff is at smaller and more tractable scales. * I now feel mildly more optimistic that focusing on mech interp work on small models is just fine, and extremely worth it for the much faster feedback loops. It also seems super nice to get better at automatically finding these circuits, since this was a many month manual slog! See [Tom's](https://twitter.com/lieberum_t/status/1681709463592370180) and [my Twitter summaries](https://twitter.com/NeelNanda5/status/1681709735631003653) for more. Note that I (Neel) am cross-posting this on behalf of the team, and neither a main research contributor nor main advisor for the project. Key Figures ----------- ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Av3frxNy3y3i2kpaa/lkgehza3hnmbpivtpoaz) ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Av3frxNy3y3i2kpaa/m37wkrpzckcsofejhk5b) *An overview of the weird kinds of heads found, like the "attend to B if it is correct" head!* ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Av3frxNy3y3i2kpaa/xjadjltgnhjew4huyria) *The losses under different mutations of the letters - experiments to track down exactly which features were used. Eg replacing the labels with random letters or numbers preserves the "nth item in the list" feature while shuffling ABCD lets us track the "line labelled B" feature* ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Av3frxNy3y3i2kpaa/rmjs12mkkiwasxpmqhe9) *The queries and keys of a crucial correct letter head - it's so linearly separable! We can near loss-lessly compress it to just 3 dimensions and interpret just those three dimensions. [See an interactive 3D plot here](https://sites.google.com/corp/view/does-mechinterp-scale)* Abstract -------- > > Circuit analysis is a promising technique for understanding the internal mechanisms of language models. However, existing analyses are done in small models far from the state of the art. To address this, we present a case study of circuit analysis in the 70B Chinchilla model, aiming to test the scalability of circuit analysis. 
In particular, we study multiple-choice question answering, and investigate Chinchilla's capability to identify the correct answer label given knowledge of the correct answer text. We find that the existing techniques of logit attribution, attention pattern visualization, and activation patching naturally scale to Chinchilla, allowing us to identify and categorize a small set of `output nodes` (attention heads and MLPs). > > > > > We further study the `correct letter` category of attention heads aiming to understand the semantics of their features, with mixed results. For normal multiple-choice question answers, we significantly compress the query, key and value subspaces of the head without loss of performance when operating on the answer labels for multiple-choice questions, and we show that the query and key subspaces represent an `Nth item in an enumeration` feature to at least some extent. However, when we attempt to use this explanation to understand the heads' behaviour on a more general distribution including randomized answer labels, we find that it is only a partial explanation, suggesting there is more to learn about the operation of `correct letter` heads on multiple choice question answering. > > > Read the full paper here: <https://arxiv.org/abs/2307.09458>
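For readers unfamiliar with the techniques named above, here is a generic toy illustration of activation patching on a made-up two-layer numpy model; it is not the Chinchilla tooling used in the paper, just the bare mechanic of splicing a clean activation into a corrupted run.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))  # toy 2-layer "model"

def forward(x, patch=None):
    """Run the toy model; `patch = (indices, values)` overwrites part of the hidden layer."""
    hidden = np.tanh(W1 @ x)
    if patch is not None:
        idx, values = patch
        hidden = hidden.copy()
        hidden[idx] = values
    return W2 @ hidden, hidden

clean_x = rng.normal(size=8)    # stands in for the clean prompt
corrupt_x = rng.normal(size=8)  # stands in for the corrupted prompt

clean_logits, clean_hidden = forward(clean_x)
corrupt_logits, _ = forward(corrupt_x)

# Activation patching: rerun the corrupted input, but splice in the activations of
# one component (here, hidden units 0-3) saved from the clean run, and measure how
# much of the clean-vs-corrupt difference in the target logit that component explains.
idx = np.arange(4)
patched_logits, _ = forward(corrupt_x, patch=(idx, clean_hidden[idx]))

target = 0
restored = (patched_logits[target] - corrupt_logits[target]) / (
    clean_logits[target] - corrupt_logits[target])
print(f"fraction of the clean-vs-corrupt logit gap restored: {restored:.2f}")
```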
32f784ed-3897-4583-b336-76fbe5d64f41
trentmkelly/LessWrong-43k
LessWrong
Youth Lockout Cross-posted from Substack. EDIT 12/04/2025: The last two paragraphs were logically inconsistent and overly strong. I’ve updated the final paragraph to fix the issue and better reflect my position after more thought. Thank you to Julian for pointing this out. AI job displacement will affect young people first, disrupting the usual succession of power and locking the next generation out of our institutions. I’m coining the term youth lockout to describe this phenomenon. Youth Lockout We are on track to build AI agents that can independently perform valuable intellectual labour. These agents will directly compete with human workers for roles in the labour market, often offering services at lower cost and greater speeds. In historical cases of automation, such as the industrial revolution, automation reduced the number of human jobs in some industries but created enough new opportunities that overall demand for labour went up. AI automation will be very different from these examples because AI is a much more general-purpose technology. With time, AI will likely perform all economic tasks at human or superhuman levels, meaning any new firm or industry will be able to use AI labour instead of human labour. If this becomes true, then the long-term equilibrium is an almost-fully-automated economy, save for specific areas where a human touch remains essential, like childcare. In the short-term, however, the transition to AI labour will happen in stages as capabilities improve unevenly. Moreover, organisations will be slow to fire existing employees, even after their work can be automated, since mass layoffs are disruptive, unpopular, and often illegal. A hiring freeze, on the other hand, is much easier to implement and saves the organisation money as hiring is expensive. The process of screening CVs, interviewing candidates, and performing work trials takes up dozens of hours of employee time, while hiring entry-level employees is even more costly as inexperienced hi
927ea056-bd96-4fe1-956d-5ef9a650cfa7
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Foom seems unlikely in the current LLM training paradigm **Epistemic status:** *The idea here has likely been articulated before, I just haven't noticed it, so it might be worth pointing it out again.* Foom describes the idea of a rapid AI takeoff caused by an AI's ability to recursively improve itself. Most discussions about Foom assume that each next iteration of improved models can in principle be developed and deployed in a short amount of time. Current LLMs require huge amounts of data and compute to be trained. Even if GPT-4 or similar models were able to improve their own architecture, they would still need to be trained from scratch using that new architecture. This would take a long time and can't easily be done without people noticing. The most extreme Foom scenarios of models advancing many generations in < 24 hours seem therefore unlikely in the current LLM training paradigm. There could be paths towards Foom with current LLMs that don't require new, improved models to be trained from scratch: 1. A model might figure out how to adjust its own weights in a targeted way. This would essentially mean that the model has solved interpretability. It seems unlikely to me that it is possible to get to this point without running a lot of compute-intensive experiments. 2. It's conceivable that the recursive self-improvement that leads to Foom doesn't happen on the level of the base LLM, but on a level above that, where multiple copies of a base model are called in a way that results in emergent behavior or agency, similar to what [Auto-GPT](https://github.com/Torantulino/Auto-GPT) is trying to do. I think this approach can potentially go a long way, but it might ultimately be limited by how smart the base model is. Insofar as it is required to train a new model with 100s of billions of parameters from scratch in order to make real progress towards AGI, there is an upper limit to how fast recursive self-improvement can progress.
9b934b54-5c40-41f8-9dbe-2ba49f1198c9
trentmkelly/LessWrong-43k
LessWrong
Solstice Quaker Darkness Circling This year I did a small "Darkness Solstice" in the woods (i.e. basically got up to the moment of darkness, and then instead of having an uplifting third act we just sorta sat in the darkness for awhile). "Darkness Solstice" probably doesn't make sense for most people. But a thing that was interesting and perhaps surprisingly relevant was "Solstice Quaker Circling". After the moment of darkness we just sat around a campfire in silence. Anyone could speak up and share something if they felt like it. I modeled this after the structure of a Quaker meeting, with a vague dose of Circling culture thrown in. When you felt like you'd had "enough", you lit a candle to indicate you were ready to move on from the segment (which in this case meant wrapping up the solstice with one final song). I think it was about 40 minutes before a critical mass of people had lit their "I'm done" candles. I was happy with how the "I'm done" mechanic played out – if I had had to guess how long to have the period last, I might have gone with something much shorter. We ended up sharing a lot of personal stuff that felt valuable to me. Some impromptu solstice speeches came up that were interesting. Some minor personal interactions. A question came up: "Am I allowed to respond to what other people have said?" and my answer was "Yes, but only in slow motion." To preserve the overall vibe and prevent it from turning into "just a conversation", the guideline was to always wait a minute or so before saying the next thing. So, you could respond to something someone else said but not transition into rapid back-and-forth that'd have killed the mood. It was definitely a great experience for me as a small solstice. There were six of us there, and five of us shared things during the period. I think this might surprisingly just.... work, even at larger scales? It's modeled after a Quaker meeting. I've seen Quaker meetings work well with ~30-40 people, and the largest ones are apparently ~
37c45df2-4c52-4b3a-b343-d1126c5fa6e8
trentmkelly/LessWrong-43k
LessWrong
Distributed public goods provision Most people benefit significantly from privately funded public goods (e.g. Wikipedia). If we all contribute to such public goods, then we can all end up better off. But as an individual it’s almost never a good return on investment. I think of supporting such public goods as being a good citizen, but that leaves open the question what is a good amount to contribute? I can make most decisions by balancing costs and benefits, but I think that basically never leads to making small contributions to public goods in the name of being a “good citizen.” This post doesn’t aim to answer that full question, but it sets up one simple formal model for the situation and a very natural “wish list” for public goods funding norms. It then proposes the following norm that meets this wish list: > My contribution to public good X should be 100 times larger than the amount I’d personally benefit if public good X received 1% more funding. For example, suppose I’m considering how much to contribute to Wikipedia. Maybe I get $1000/year of value from Wikipedia, and I’d get $1000.10 of value if Wikipedia had 1% more funding. This rule says I should contribute $10/year (= 10 cents x 100). (In practice I think that labor is often a more important contribution than money, but a similar principle can apply. I’d also prefer that most donors to public goods use something like a donor lottery so that they can make a more informed decision about a larger amount of money when they do decide to give. This post is trying to bite off a tiny bit of the conceptual problem.) Formal model Assume there are n people and k projects. Person i makes a contribution x_ij to public good j. Let y_j = sum x_ij be the total funding for public good j. Each person has a utility function U_i(y_1, …, y_k) expressing how much they benefit if the public goods are funded at levels y_1, …, y_k. The total utility for person i is U_i(y_1, …, y_k) – sum_j x_ij. For simplicity I’ll assume that everyone’s utility is monot
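A small Python sketch of the contribution norm above, using the Wikipedia numbers from the post; the concave benefit curve in the second half is a made-up placeholder, included only to show how the same rule reads against a utility function of total funding y_j.

```python
import math

def recommended_contribution(benefit_from_1pct_more_funding: float) -> float:
    """The post's norm: give 100x your personal benefit from the good
    receiving 1% more funding."""
    return 100.0 * benefit_from_1pct_more_funding

# Worked example from the post: Wikipedia is worth ~$1000/year to me and
# ~$1000.10/year if it had 1% more funding, so the rule says ~$10/year.
print(round(recommended_contribution(1000.10 - 1000.00), 2))

# The same rule stated against a (hypothetical) utility function of total funding y_j.
def contribution_from_utility(U, y: float) -> float:
    return 100.0 * (U(1.01 * y) - U(y))

U = lambda y: 200.0 * math.log(1.0 + y / 1e6)  # placeholder benefit curve, not from the post
print(round(contribution_from_utility(U, 5e6), 2))
```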
b17c566c-7ec2-48a6-ac78-d544450fe2cd
trentmkelly/LessWrong-43k
LessWrong
Chicago Meetup: Sunday, August 1 at 2:00 pm We’re holding the Chicago meetup discussed here on Sunday, August 1, 2010 at 2:00 pm. The tentative location is the Corner Bakery at the corner of State and Cedar (1121 N. State St.), but we’re also happy to move the meetup further up to the North side as has been previously discussed, if anyone has a suggestion for a good venue. We will post any updates here as well as to our Chicago LW meetup Google group. Please comment here if you plan to attend. We'll have a table-top sign to help you identify us. We’re looking forward to a second successful Chicago meetup and hope to see some old and new faces!  
26be218e-0468-4186-8c63-c33d6a842442
trentmkelly/LessWrong-43k
LessWrong
Twin Peaks: under the air Content warning: low content ~ Feb 2021 The other day I decided to try imbibing work-relevant blog posts via AI-generated recital, while scaling the Twin Peaks—large hills near my house in San Francisco, of the sort that one lives near and doesn’t get around to going to. It was pretty strange, all around. For one thing, I was wearing sunglasses. I realize this is a thing people do all the time. Maybe it’s strange for them too, or maybe theirs aren’t orange. Mine were, which really changed the situation. For one thing, the glowing streetscapes felt unreal, like cheap science fiction. But also, all kinds of beauty seemed to want photographing, but couldn’t be seen with my camera. It was funny to realize that I’m surrounded by potential beauty all the time, that I would see if I had different eyes, or different glasses, or different sensory organs all together. Like, the potential for beauty is as real as the beauty I do see. (This is perhaps obvious, but something being obvious doesn’t mean you know it. And knowing something doesn’t mean you realize it. I’d say I knew it, but hadn’t realized it.) And then my ears were cornered in by these plugs spouting electronic declarations on the nature of coherent agents and such, which added to my sense of my head just not really being in the world, and instead being in a cozy little head cockpit, from which I could look out on the glowing alien landscape. My feet were also strange, but in the opposite direction. I recently got these new sock-shoes and I was trying them out for the first time. They are like well-fitting socks with strong but pliable rubber stuff sprayed on the bottom. Wearing them, you can feel the ground under your feet, as if you were bare-foot. Minus the sharp bits actually lacerating your feet, or the squishy bits sullying them. Walking along I imagined my freed feet were extra hands, holding the ground. I had only been up to Twin Peaks twice before, and I guess I had missed somehow exactly how crazy t
7e9a1510-d057-43fc-b304-1aa1950295e8
StampyAI/alignment-research-dataset/arxiv
Arxiv
MultiXNet: Multiclass Multistage Multimodal Motion Prediction I Introduction --------------- Predicting the future states of other actors such as vehicles, pedestrians, and bicyclists represents a key capability for self-driving technology. This is a challenging task, and has been found to play an important role in accident avoidance for human drivers [[35](#bib.bib4 "Anticipation in driving: the role of experience in the efficacy of pre-event conflict cues"), [36](#bib.bib3 "Supporting anticipation in driving through attentional and interpretational in-vehicle displays")], as well as for their autonomous counterparts [[8](#bib.bib53 "Towards full automated drive in urban environments: A demonstration in gomentum station, california")]. Within the context of a Self-Driving Vehicle (SDV) it is important to capture the range of possibilities for other actors, and not just a single most likely trajectory. Consider an opposing vehicle approaching an intersection, which may continue driving straight or turn in front of the SDV. In order to ensure safety the SDV needs to accurately reason about both of these possible modes and modulate its behavior accordingly. In addition to the discrete modes, a downstream motion planner may react differently to a prediction depending on the continuous uncertainty within a predicted trajectory. As an example, if an opposing vehicle looks like it might take a wide turn and come into the SDV’s lane, the SDV can preemptively slow down to reduce the risk. On the other hand, if the prediction shows confidence that the opposing vehicle will stay in its lane the SDV could choose to maintain its current speed. Bringing the above requirements together, Fig. [1](#S1.F1 "Fig. 1 ‣ I INTRODUCTION ‣ MultiXNet: Multiclass Multistage Multimodal Motion Prediction") shows an example of the task addressed by this work. The input is a map and a sequence of lidar data which is projected into a common global coordinate frame using the SDV pose. The output is a multimodal distribution over potential future states for the other actors in the scene. An important challenge is that various actor types such as pedestrians and vehicles exhibit significantly different behavior, while a deployed approach needs to handle all actors present in a scene. ![Example output of the proposed MultiXNet model, showing detections and multimodal, uncertainty-aware motion predictions for multiple actor types overlaid on top of lidar and map data (including pedestrians on the sidewalks and a vehicle and a bicyclist approaching the SDV)](https://media.arxiv-vanity.com/render-output/8049776/figs/f5cf6413-9346-40f5-d9e3-c7836af7145d_1173218594.5091950000.jpg) Fig. 
1: Example output of the proposed MultiXNet model, showing detections and multimodal, uncertainty-aware motion predictions for multiple actor types overlaid on top of lidar and map data (including pedestrians on the sidewalks and a vehicle and a bicyclist approaching the SDV) Prior work in this area has demonstrated strong performance by using end-to-end methods that jointly learn detection and motion prediction directly from sensor data [[23](#bib.bib17 "Fast and furious: real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net"), [3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")], including the addition of a jointly learned refinement stage of the network that leads to improved trajectory prediction [[2](#bib.bib130 "Spatially-aware graph neural networks for relational behavior forecasting from sensor data")]. However, these approaches have generally focused only on vehicles and produce a single trajectory rather than full motion distributions. More recent work has shown the ability to learn a continuous distribution directly from sensor data for multiple classes [[28](#bib.bib132 "LaserFlow: efficient and probabilistic object detection and motion forecasting")], but the distributions are not multimodal. Prediction methods that operate on detections rather than the raw sensor data have shown improved performance by introducing multiclass predictions [[7](#bib.bib39 "Predicting motion of vulnerable road users using high-definition maps and efficient convnets")], estimates of uncertainty [[4](#bib.bib118 "Multipath: multiple probabilistic anchor trajectory hypotheses for behavior prediction"), [11](#bib.bib26 "Uncertainty-aware short-term motion prediction of traffic actors for autonomous driving")], or incorporating multiple modes [[10](#bib.bib27 "Multimodal trajectory predictions for autonomous driving using deep convolutional networks")]. While each of these concepts has been considered individually, this work looks to unify them into a single approach which we empirically show to outperform the competing baselines. Our work builds on IntentNet [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")] to produce an end-to-end approach for motion prediction from lidar data with the following contributions: * joint detection and motion prediction of multiple actor classes: vehicles, pedestrians, and bicyclists; * modeling both cross-track and along-track uncertainty of actor movement; * a jointly trained second-stage trajectory refinement step that improves prediction accuracy; * multimodal trajectory prediction to capture distinct future trajectories of traffic actors. Using a large-scale, real-world data set, the proposed approach was shown to outperform the current state-of-the-art, and we experimentally demonstrate the contribution of each of the above improvements. Ii Related Work ---------------- Object detection is a critical task for a SDV system, with a number of papers proposed recently in the literature. A popular approach is using a bird’s-eye view (BEV) representation, where lidar points are encoded in 3D voxels [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")], which has a strong benefit of being a range-invariant representation of objects. 
PointPillars [[19](#bib.bib126 "Pointpillars: fast encoders for object detection from point clouds")] proposed to learn the BEV encoding through a computation scheme that provides better speed while keeping accuracy high. Range view (RV) is another popular lidar point representation which provides a compact input while preserving all sensor information. The authors of [[25](#bib.bib114 "Lasernet: an efficient probabilistic 3d object detector for autonomous driving")] showed that RV is good at detection of both near and long-range objects, which can further be improved by combining a camera image with RV lidar [[27](#bib.bib125 "Sensor fusion for joint 3d object detection and semantic segmentation")]. Recent work applies both BEV and RV representations [[6](#bib.bib115 "Multi-view 3d object detection network for autonomous driving"), [41](#bib.bib124 "End-to-end multi-view fusion for 3d object detection in lidar point clouds")], extracting features using separate branches of the network that are fused at a later stage. This fusion method preserves information for both near- and long-range objects, at the cost of a more complex and heavy network structure. In this work we focus on a BEV approach, and discuss several ideas for how to improve on the current state-of-the-art. Movement prediction is another major topic in the SDV community. Typically, the prediction models take current and past detections as inputs, and then output trajectories for the next several seconds. A common approach is to train recurrent models to process the inputs and extract learned features [[20](#bib.bib107 "Desire: distant future prediction in dynamic scenes with interacting agents"), [15](#bib.bib120 "The trajectron: probabilistic multi-agent trajectory modeling with dynamic spatiotemporal graphs"), [34](#bib.bib121 "Trajectron++: multi-agent generative trajectory forecasting with heterogeneous data for control"), [13](#bib.bib97 "Social gan: socially acceptable trajectories with generative adversarial networks"), [40](#bib.bib108 "Multi-agent tensor fusion for contextual trajectory prediction"), [33](#bib.bib110 "Sophie: an attentive gan for predicting paths compliant to social and physical constraints"), [18](#bib.bib117 "Social-bigat: multimodal trajectory forecasting using bicycle-gan and graph attention networks")]. A number of methods have been proposed that take actor surroundings and other contextual information through BEV images as input, and extract useful scene features using convolutional neural networks (CNNs) [[23](#bib.bib17 "Fast and furious: real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net"), [7](#bib.bib39 "Predicting motion of vulnerable road users using high-definition maps and efficient convnets"), [4](#bib.bib118 "Multipath: multiple probabilistic anchor trajectory hypotheses for behavior prediction"), [11](#bib.bib26 "Uncertainty-aware short-term motion prediction of traffic actors for autonomous driving"), [10](#bib.bib27 "Multimodal trajectory predictions for autonomous driving using deep convolutional networks"), [34](#bib.bib121 "Trajectron++: multi-agent generative trajectory forecasting with heterogeneous data for control")]. These models can then predict actor trajectories using a decoder architecture based on the extracted features. Interestingly, the majority of former research on trajectory prediction has focused on predicting the motion of a particular type of road actor (e.g., vehicle or pedestrian). 
However, multiple types of traffic actors exist together on public roads, and SDVs need to accurately predict all relevant actors’ motions in order to drive safely. Moreover, different actor types have distinct motion patterns (e.g., bicyclists and pedestrians behave quite differently [[7](#bib.bib39 "Predicting motion of vulnerable road users using high-definition maps and efficient convnets")]), and it is important to model them separately. A few recent papers tackled this challenge using recurrent methods [[34](#bib.bib121 "Trajectron++: multi-agent generative trajectory forecasting with heterogeneous data for control"), [24](#bib.bib95 "TrafficPredict: trajectory prediction for heterogeneous traffic-agents"), [5](#bib.bib112 "Traphic: trajectory prediction in dense and heterogeneous traffic using weighted interactions")], however unlike our work an existence of a detection system was assumed and they were not trained end-to-end using raw sensor data. Another aspect of the prediction task that is important for ensuring safe SDV operations is modeling the stochasticity of traffic behavior, either by considering multimodality of actor movement (e.g., whether they are going to turn left or right at an intersection) or position uncertainty within a single mode. When it comes to multimodality of future trajectories there are two common classes of approaches. The first is the use of generative models, either explicitly with conditional variational autoencoders [[20](#bib.bib107 "Desire: distant future prediction in dynamic scenes with interacting agents"), [15](#bib.bib120 "The trajectron: probabilistic multi-agent trajectory modeling with dynamic spatiotemporal graphs"), [34](#bib.bib121 "Trajectron++: multi-agent generative trajectory forecasting with heterogeneous data for control"), [38](#bib.bib123 "Diverse trajectory forecasting with determinantal point processes")] or implicitly with generative adversarial networks [[13](#bib.bib97 "Social gan: socially acceptable trajectories with generative adversarial networks"), [40](#bib.bib108 "Multi-agent tensor fusion for contextual trajectory prediction"), [33](#bib.bib110 "Sophie: an attentive gan for predicting paths compliant to social and physical constraints"), [18](#bib.bib117 "Social-bigat: multimodal trajectory forecasting using bicycle-gan and graph attention networks")]. Once trained, trajectories are predicted by sampling from the learned distribution at inference time. The generative models often require the system to draw many samples to ensure good coverage in the trajectory space (e.g., as many as 20 for [[13](#bib.bib97 "Social gan: socially acceptable trajectories with generative adversarial networks"), [33](#bib.bib110 "Sophie: an attentive gan for predicting paths compliant to social and physical constraints")]), which may be impractical for time-critical applications. The second category of approaches directly predicts a fixed number of trajectories along with their probabilities in a single-shot manner [[4](#bib.bib118 "Multipath: multiple probabilistic anchor trajectory hypotheses for behavior prediction"), [10](#bib.bib27 "Multimodal trajectory predictions for autonomous driving using deep convolutional networks"), [31](#bib.bib119 "CoverNet: multimodal behavior prediction using trajectory sets"), [9](#bib.bib28 "Deep kinematic models for physically realistic prediction of vehicle trajectories")]. 
The trajectories and probabilities are jointly trained with a combination of regression and classification losses, and are much more efficient than the alternatives. As a result, most applied work follows the one-shot approach [[4](#bib.bib118 "Multipath: multiple probabilistic anchor trajectory hypotheses for behavior prediction"), [31](#bib.bib119 "CoverNet: multimodal behavior prediction using trajectory sets")]. In addition to multimodality, it is important to capture uncertainty of actor motion within a trajectory mode. This can be achieved by explicitly modeling each trajectory as a probability distribution, for example by modelling trajectory waypoints using Gaussians [[4](#bib.bib118 "Multipath: multiple probabilistic anchor trajectory hypotheses for behavior prediction"), [11](#bib.bib26 "Uncertainty-aware short-term motion prediction of traffic actors for autonomous driving"), [15](#bib.bib120 "The trajectron: probabilistic multi-agent trajectory modeling with dynamic spatiotemporal graphs"), [34](#bib.bib121 "Trajectron++: multi-agent generative trajectory forecasting with heterogeneous data for control"), [14](#bib.bib127 "Rules of the road: predicting driving behavior with a convolutional model of semantic interactions")]. Following a different paradigm, some researchers have proposed non-parametric approaches [[16](#bib.bib131 "Discrete residual flow for probabilistic pedestrian behavior prediction")] to directly predict an occupancy map. While parametric approaches can easily be cast into cell occupancy space the reverse is not necessarily true, limiting the applicability of such output representations in downstream modules of the SDV system. Instead of using independent detection and motion forecasting models, some recent work has proposed to train them jointly in an end-to-end fashion, taking raw sensor data as inputs. This approach was pioneered in the FaF model [[23](#bib.bib17 "Fast and furious: real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net")], while IntentNet [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")] further included map data as an input and proposed to predict both actor trajectories and their high-level intents. The authors of [[39](#bib.bib129 "End-to-end interpretable neural motion planner")] further extended this idea to an end-to-end model that also includes motion planning. SpAGNN [[2](#bib.bib130 "Spatially-aware graph neural networks for relational behavior forecasting from sensor data")] introduced a two-stage model with Rotated Region of Interest (RROI) cropping, a graph neural network module to encode actor relations, as well as modeling the uncertainty of future trajectories. MotionNet [[37](#bib.bib122 "MotionNet: joint perception and motion prediction for autonomous driving based on bird’s eye view maps")] used a spatial-temporal pyramid network to jointly perform detection and motion prediction for each BEV grid cell. LaserFlow [[28](#bib.bib132 "LaserFlow: efficient and probabilistic object detection and motion forecasting")] proposed an end-to-end model using multi-frame RV lidar inputs, unlike the other methods which use BEV representations, which can also perform prediction on multiple actor types. Compared to our method, most of the above end-to-end methods do not consider motion prediction on diverse road actor types, and none of them addresses the multimodal nature of possible future trajectories. 
The earlier work has clearly shown the promise of end-to-end approaches, with researchers looking at various aspects to improve the prediction performance. In this paper we propose the first model to bring these key ideas together, and show in the experimental section the benefits over the baselines. ![Overview of the MultiXNet architecture, where the first-stage network corresponding to IntentNet ](https://media.arxiv-vanity.com/render-output/8049776/x1.png) Fig. 2: Overview of the MultiXNet architecture, where the first-stage network corresponding to IntentNet [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")] outputs actor detections and their unimodal motion prediction, while the second stage refines predictions to be multimodal and uncertainty-aware; note that the first-stage prediction of the right-turning vehicle is incorrect, and the second stage improves its prediction Iii Proposed Approach ---------------------- In this section we describe our end-to-end method for joint detection and prediction, called MultiXNet. We first describe the existing state-of-the-art IntentNet architecture [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")], followed by a discussion of our proposed improvements. ### Iii-a Baseline end-to-end detection and prediction #### Iii-A1 Input representation In Fig. [2](#S2.F2 "Fig. 2 ‣ II RELATED WORK ‣ MultiXNet: Multiclass Multistage Multimodal Motion Prediction") we show the lidar and map inputs to the network. We assume a lidar sensor installed on the SDV that provides measurements at regular time intervals. At time t a lidar sweep St comprises a set of 3D lidar returns represented by their (x,y,z) locations. Following [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")], we encode the lidar data St in a BEV image centered on the SDV, with voxel sizes of ΔL and ΔW along the forward x- and left y-axes, respectively, and ΔV along the vertical axis (representing the image channels). Each voxel encodes binary information about whether or not there exists at least one lidar return inside that voxel. In addition, to capture temporal information we encode the T−1 past lidar sweeps {St−T+1,…,St−1} into the same BEV frame using their known SDV poses, and stack them together along the channel (or vertical) dimension. Assuming we consider an area of length L, width W, and height V, this yields an image of size ⌈L/ΔL⌉×⌈W/ΔW⌉×T⌈V/ΔV⌉. Moreover, let us assume we have access to a high-definition map of the operating area around the SDV denoted by M. As shown in Fig. [2](#S2.F2 "Fig. 2 ‣ II RELATED WORK ‣ MultiXNet: Multiclass Multistage Multimodal Motion Prediction"), we encode static map elements from M in the same BEV frame as introduced above. These include driving paths, crosswalks, lane and road boundaries, intersections, driveways, and parking lots, where each element is encoded as a binary mask in its own separate channel. This results in a total of seven additional map channels, which are processed by a few convolutional layers before being stacked with the processed lidar channels, to be used as a BEV input to the rest of the network, as described in [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")]. #### Iii-A2 Network architecture and output The input BEV image can be viewed as a top-down grid representation of the SDV’s surroundings, with each grid cell comprising input features encoded along the channel dimensions.
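A rough numpy sketch of the binary BEV rasterization described above; the grid extents, voxel sizes, and the random point clouds below are illustrative placeholders rather than the values used in the paper.

```python
import numpy as np

def rasterize_sweep(points_xyz, L=100.0, W=100.0, V=5.0, dL=0.25, dW=0.25, dV=0.5):
    """Binary BEV occupancy grid for one lidar sweep, centred on the SDV.

    Returns an array of shape (ceil(L/dL), ceil(W/dW), ceil(V/dV)); a voxel is 1
    if at least one lidar return falls inside it.
    """
    nx, ny, nz = int(np.ceil(L / dL)), int(np.ceil(W / dW)), int(np.ceil(V / dV))
    grid = np.zeros((nx, ny, nz), dtype=np.uint8)
    # Shift so the SDV sits at the centre of the x-y extent and the ground is at z = 0.
    ix = np.floor((points_xyz[:, 0] + L / 2) / dL).astype(int)
    iy = np.floor((points_xyz[:, 1] + W / 2) / dW).astype(int)
    iz = np.floor(points_xyz[:, 2] / dV).astype(int)
    keep = (0 <= ix) & (ix < nx) & (0 <= iy) & (iy < ny) & (0 <= iz) & (iz < nz)
    grid[ix[keep], iy[keep], iz[keep]] = 1
    return grid

# T sweeps (already transformed into the current SDV frame) are stacked along the
# channel dimension, giving an input of size ceil(L/dL) x ceil(W/dW) x T*ceil(V/dV).
sweeps = [np.random.uniform(-40, 40, size=(10_000, 3)) for _ in range(3)]
bev_input = np.concatenate([rasterize_sweep(s) for s in sweeps], axis=-1)
print(bev_input.shape)  # (400, 400, 30) with these placeholder settings
```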
As in [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")], this image is then processed by a sequence of 2-D convolutional layers to produce a final layer that contains learned features for each cell location. Following an additional 1×1 convolutional layer, for each cell we predict two sets of outputs, representing object detection and its movement prediction (in the following text we denote a predicted value by the hat-notation ^∗). In particular, the detection output for a cell centered at (x,y) comprises an existence probability ^p, oriented bounding box represented by its center ^c0=(^cx0,^cy0) relative to the center of the grid cell, size represented by length ^l and width ^w, and heading ^θ0 relative to the x-axis, parameterized as a tuple (sin^θ0,cos^θ0). In addition, the prediction output is composed of bounding box centers (or waypoints) ^ch=(^cxh,^cyh) and headings ^θh at H future time horizons, with h∈{1,…,H}. A full set of H waypoints is denoted as a trajectory $\hat{\tau}=\{\hat{c}_h,\hat{\theta}_h\}_{h=1}^{H}$, where the bounding box size is considered constant across the entire prediction horizon. #### Iii-A3 Loss As discussed in [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")], the loss at a certain time step consists of detection and prediction losses computed over all BEV cells. When it comes to the per-pixel detection loss, a binary focal loss $\ell_{\text{focal}}(\hat{p}) = -(1-\hat{p})^{\gamma}\log\hat{p}$ is used for the probability of a ground-truth object [[22](#bib.bib21 "Focal loss for dense object detection")], where we empirically found good performance with hyper-parameter γ set to 2. Moreover, when there exists a ground-truth object in a particular cell a smooth-ℓ1 regression loss ℓ1(^v−v) is used for all bounding box parameters (i.e., center, size, and heading), where the loss is computed between the predicted value ^v and the corresponding ground truth v. The smooth-ℓ1 regression loss is used to capture prediction errors of future bounding box centers and headings. We refer to a cell containing an object as a foreground (fg) cell, and a background (bg) cell otherwise. Then, the overall loss at horizon h for a foreground cell $\mathcal{L}_{fg}(h)$ is computed as

$$\mathcal{L}_{fg}(h) = \mathbb{1}_{h=0}\big(\ell_{\text{focal}}(\hat{p}) + \ell_1(\hat{l}-l) + \ell_1(\hat{w}-w)\big) + \ell_1(\hat{c}_{xh}-c_{xh}) + \ell_1(\hat{c}_{yh}-c_{yh}) + \ell_1(\sin\hat{\theta}_h-\sin\theta_h) + \ell_1(\cos\hat{\theta}_h-\cos\theta_h), \tag{1}$$

where $\mathbb{1}_c$ equals 1 if the condition c holds and 0 otherwise. Loss for a background cell equals $\mathcal{L}_{bg}=\ell_{\text{focal}}(1-\hat{p})$. Lastly, to enforce a lower error tolerance for earlier horizons we multiply the per-horizon losses by fixed weights that are gradually decreasing for future timesteps, and the per-horizon losses are aggregated to obtain the final loss,

$$\mathcal{L} = \mathbb{1}_{\text{bg cell}}\,\mathcal{L}_{bg} + \mathbb{1}_{\text{fg cell}}\sum_{h=0}^{H}\lambda^{h}\,\mathcal{L}_{fg}(h), \tag{2}$$

where λ∈(0,1) is a constant decay factor (set to 0.97 in our experiments). The loss contains both detection and prediction components, and all model parameters are learned jointly in an end-to-end manner. ### Iii-B Improving end-to-end motion prediction In this section we present an end-to-end method that improves over the current state-of-the-art. We build on the approach presented in the previous section, extending it to significantly improve its motion prediction performance. #### Iii-B1 Uncertainty-aware loss In addition to predicting trajectories, an important task in autonomous driving is the estimation of their spatial uncertainty.
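To make Eqs. (1)–(2) above concrete before moving on to the uncertainty-aware loss, here is a rough PyTorch-style sketch of the per-cell loss; the dictionary packaging, scalar toy targets, and treatment of a single cell are simplifications for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def focal_loss(p_hat, gamma=2.0, eps=1e-6):
    # l_focal(p) = -(1 - p)^gamma * log(p), applied to the foreground probability.
    return -((1.0 - p_hat) ** gamma) * torch.log(p_hat + eps)

def foreground_loss(pred, gt, h, lam=0.97):
    """Per-cell foreground loss at horizon h (Eq. 1), weighted by lam**h as in Eq. 2.

    `pred` and `gt` are dicts with keys 'p', 'l', 'w', 'cx', 'cy', 'sin', 'cos';
    the detection terms (probability and box size) only apply at h == 0.
    Background cells would instead contribute focal_loss(1 - pred['p']).
    """
    loss = (F.smooth_l1_loss(pred["cx"], gt["cx"])
            + F.smooth_l1_loss(pred["cy"], gt["cy"])
            + F.smooth_l1_loss(pred["sin"], gt["sin"])
            + F.smooth_l1_loss(pred["cos"], gt["cos"]))
    if h == 0:
        loss = loss + focal_loss(pred["p"]) \
                    + F.smooth_l1_loss(pred["l"], gt["l"]) \
                    + F.smooth_l1_loss(pred["w"], gt["w"])
    return (lam ** h) * loss

# Toy usage for a single foreground cell over H = 3 future horizons.
H = 3
gt = {k: torch.tensor(1.0) for k in ["p", "l", "w", "cx", "cy", "sin", "cos"]}
pred = {k: torch.tensor(0.8, requires_grad=True) for k in gt}
total = sum(foreground_loss(pred, gt, h) for h in range(H + 1))
total.backward()
print(float(total))
```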
This is useful for fusion of results from multiple predictors, and is also consumed by a motion planner to improve SDV safety. In earlier work [[11](#bib.bib26 "Uncertainty-aware short-term motion prediction of traffic actors for autonomous driving")] it was proposed as a fine-tuning step following training of a model that only considered trajectory waypoints without uncertainties. Then, by freezing the main prediction weights or setting a low learning rate, the uncertainty module was trained without hurting the overall prediction performance. In this paper we describe a method that learns trajectories and uncertainties jointly, where we decompose the position uncertainty in the along-track (AT) and cross-track (CT) directions [[12](#bib.bib10 "A methodology for automated trajectory prediction analysis")]. In particular, a predicted waypoint ^ch is projected along AT and CT directions by considering the ground-truth heading θh, and the errors along these directions are assumed to follow a Laplace distribution Laplace(μ,b), with a PDF of a random Laplacian variable v computed as

$$\frac{1}{2b}\exp\!\left(-\frac{|v-\mu|}{b}\right), \tag{3}$$

where mean μ and diversity b are the Laplace parameters. We assume that AT and CT errors are independent, with each having a separate set of Laplace parameters. Taking AT as an example and assuming an error value ^eAT, this defines a Laplace distribution Laplace(^eAT,^bAT). Then, we minimize the loss by minimizing the Kullback–Leibler (KL) divergence between the ground-truth Laplace(0,bAT) and the predicted Laplace(^eAT,^bAT), computed as follows [[29](#bib.bib13 "An alternative probabilistic interpretation of the huber loss")],

$$\mathrm{KL}_{AT} = \log\frac{\hat{b}_{AT}}{b_{AT}} + \frac{b_{AT}\exp\!\left(-\frac{|\hat{e}_{AT}|}{b_{AT}}\right) + |\hat{e}_{AT}|}{\hat{b}_{AT}} - 1. \tag{4}$$

Similarly, KLCT can be computed for the CT errors, and we then use KLAT and KLCT instead of the smooth-ℓ1 loss for bounding box centers introduced in the previous section. An important question is the choice of ground-truth diversities bAT and bCT. In earlier detection work [[26](#bib.bib133 "Learning an uncertainty-aware object detector for autonomous driving")] a percentage of label area covered by lidar points was used, however this may not be the best choice for the prediction task as the prediction difficulty and uncertainty are expected to grow with longer horizons. To account for this, we linearly increase the ground-truth diversity with time,

$$b_{*}(t) = \alpha_{*} + \beta_{*}\,t, \tag{5}$$

where parameters α∗ and β∗ are empirically determined, with separate parameters for AT and for CT components. This is achieved by training models with varying α∗ and β∗ parameters and choosing the parameter set for which the reliability diagrams [[11](#bib.bib26 "Uncertainty-aware short-term motion prediction of traffic actors for autonomous driving")] indicate that the model outputs are the most calibrated, discussed in Sec. [IV-B](#S4.SS2 "IV-B Results ‣ IV EXPERIMENTS ‣ MultiXNet: Multiclass Multistage Multimodal Motion Prediction"). #### Iii-B2 Second-stage trajectory refinement As shown in Fig. [2](#S2.F2 "Fig. 2 ‣ II RELATED WORK ‣ MultiXNet: Multiclass Multistage Multimodal Motion Prediction"), following the detection and prediction inference described in Sec.
III-A, we perform further refinement of the motion predictions for the detected objects. The refinement network, which we refer to as the second stage of the model, discards the first-stage trajectory predictions and takes the inferred object center $\hat{\mathbf{c}}_0$ and heading $\hat{\theta}_0$, as well as the final feature layer from the main network. Then, it crops and rotates the learned features for each actor, such that the actor is oriented pointing up in the rotated image [[7](#bib.bib39 "Predicting motion of vulnerable road users using high-definition maps and efficient convnets"), [11](#bib.bib26 "Uncertainty-aware short-term motion prediction of traffic actors for autonomous driving"), [21](#bib.bib38 "Multi-task multi-sensor fusion for 3d object detection")], as illustrated in Fig. 2. The rotated ROI (RROI) feature map is then fed through a lightweight CNN before the final prediction of future trajectory and uncertainty is performed. Both the first- and second-stage networks are trained jointly, using the full loss $L$ in the first stage and only the future prediction loss in the second stage, where the second-stage predictions are used as the final output trajectories. The proposed method has several advantages. First, the output representation can be standardized in the actor frame: in the first-stage model the output trajectories can radiate in any direction from the actor position, while in the actor frame the majority of future trajectories grow forward from the origin. In addition, the second-stage network can concentrate on extracting features for a single actor of interest and discard irrelevant information. It is important to clarify that the purpose of our two-stage approach is different from that in Faster R-CNN [[32](#bib.bib37 "Faster r-cnn: towards real-time object detection with region proposal networks")], where the second stage was used to refine and classify region proposals. Instead, in our work the second stage is used to refine the trajectories and not the detections.

#### III-B3 Multimodal trajectory prediction

Traffic behavior is inherently multimodal, as traffic actors at any point may make one of several movement decisions. Modeling such behavior is an important task in the self-driving field, with several interesting ideas proposed in the literature [[4](#bib.bib118 "Multipath: multiple probabilistic anchor trajectory hypotheses for behavior prediction"), [10](#bib.bib27 "Multimodal trajectory predictions for autonomous driving using deep convolutional networks"), [31](#bib.bib119 "CoverNet: multimodal behavior prediction using trajectory sets"), [9](#bib.bib28 "Deep kinematic models for physically realistic prediction of vehicle trajectories")]. In this paper we address this problem and describe an approach that outputs a fixed number of trajectories for each detected actor along with their probabilities. In particular, instead of outputting a single predicted trajectory in the second stage for each detected actor, the model outputs a fixed number of $M$ trajectories. Let us denote the trajectory modes output by the model as $\{\hat{\tau}_m\}_{m=1}^{M}$, and their probabilities as $\{\hat{p}_m\}_{m=1}^{M}$. First, we identify one of the $M$ modes as the *ground-truth mode* $m_{gt}$, for which purpose we designed a novel direction-based policy to decide the ground-truth mode.
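Returning briefly to the uncertainty-aware loss of Sec. III-B1, the KL term of Eq. (4) combined with the linearly growing ground-truth diversity of Eq. (5) can be sketched as below; the α and β values and the way errors are projected onto the AT/CT directions are assumptions made only for illustration.

```python
import torch

def laplace_kl(e_hat, b_hat, b_gt):
    # KL( Laplace(0, b_gt) || Laplace(e_hat, b_hat) ), i.e. Eq. (4).
    return (torch.log(b_hat / b_gt)
            + (b_gt * torch.exp(-e_hat.abs() / b_gt) + e_hat.abs()) / b_hat
            - 1.0)

def uncertainty_aware_loss(err_at, err_ct, b_at_hat, b_ct_hat, horizons_s,
                           alpha=(0.2, 0.1), beta=(0.3, 0.15)):
    """Uncertainty-aware waypoint loss for one actor.

    err_at, err_ct     : (H,) along-/cross-track errors of the predicted waypoints,
                         obtained by projecting (pred - gt) using the ground-truth heading.
    b_at_hat, b_ct_hat : (H,) predicted Laplace diversities (kept positive, e.g. via softplus).
    horizons_s         : (H,) prediction horizons in seconds.
    alpha, beta        : illustrative (AT, CT) parameters of Eq. (5); the paper selects
                         these empirically using reliability diagrams.
    """
    b_at_gt = alpha[0] + beta[0] * horizons_s   # Eq. (5) for the along-track direction
    b_ct_gt = alpha[1] + beta[1] * horizons_s   # Eq. (5) for the cross-track direction
    return (laplace_kl(err_at, b_at_hat, b_at_gt)
            + laplace_kl(err_ct, b_ct_hat, b_ct_gt)).sum()
```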
To assign the ground-truth mode, we compute an angle $\Delta\theta = \theta_H - \theta_0$ between the last and the current ground-truth heading, where $\Delta\theta \in (-\pi, \pi]$. Then, we divide the range $(-\pi, \pi]$ into $M$ bins and decide $m_{gt}$ based on where $\Delta\theta$ falls. In this way, during training each mode is specialized to be responsible for a distinct behavior (e.g., for $M = 3$ we have left-turning, right-turning, and going-straight modes). Given the predictions and the ground-truth trajectory, and using an approach similar to the one discussed in [[10](#bib.bib27 "Multimodal trajectory predictions for autonomous driving using deep convolutional networks")], the multimodal trajectory loss consists of the trajectory loss of the $m_{gt}$-th trajectory mode, as described in Sec. III-B1, and a cross-entropy loss for the trajectory probabilities (see the sketch below). Lastly, we continue to use the unimodal prediction loss in the first stage to improve model training, and the multimodal trajectory loss is only applied to train the second-stage network.

#### III-B4 Handling multiple actor types

Unlike earlier work [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")] that mostly focused on a single traffic actor type, we model the behavior of multiple actor types simultaneously, focusing on vehicles, pedestrians, and bicyclists. This is done by separating three sets of outputs, one for each type, after the backbone network computes the shared BEV learned features shown in Fig. 2. Handling all actors using a single model and in a single pass simplifies the SDV system significantly, and helps ensure safe and effective operations. It is important to emphasize that in the case of pedestrians and bicyclists we found that a unimodal output results in the best performance, and we do not use the multimodal loss nor the refinement stage for these traffic actors. Thus, in our experiments we set $M = 3$ for vehicles and $M = 1$ for the other actor types. Then, the final loss of the model is the sum of per-type losses, with each per-type loss comprising the detection loss described in Sec. III-A3, as well as the uncertainty-aware trajectory loss described in Sec. III-B2 and Sec. III-B3.

IV Experiments
---------------

### IV-A Experimental settings

Following earlier work [[2](#bib.bib130 "Spatially-aware graph neural networks for relational behavior forecasting from sensor data"), [25](#bib.bib114 "Lasernet: an efficient probabilistic 3d object detector for autonomous driving")] we evaluated the proposed approach using the ATG4D data set. The data was collected by a fleet of SDVs across several cities in North America using a 64-beam, roof-mounted lidar sensor.
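A minimal sketch of the direction-based ground-truth-mode assignment and the resulting multimodal loss of Sec. III-B3 is given here; the per-mode trajectory loss is abstracted into a precomputed tensor, and all names, shapes, and the exact bin boundaries are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def assign_gt_mode(theta_0, theta_h_last, num_modes=3):
    """Direction-based mode assignment: wrap delta-theta into a 2*pi range around zero
    and bin it into num_modes equal bins (e.g., left turn / straight / right turn for M = 3)."""
    delta = torch.remainder(theta_h_last - theta_0 + math.pi, 2.0 * math.pi) - math.pi
    bin_width = 2.0 * math.pi / num_modes
    m_gt = torch.clamp(((delta + math.pi) / bin_width).long(), max=num_modes - 1)
    return m_gt  # (N,) ground-truth mode index per actor

def multimodal_loss(mode_logits, per_mode_traj_loss, m_gt):
    """mode_logits        : (N, M) unnormalized mode scores.
    per_mode_traj_loss : (N, M) trajectory (e.g., uncertainty-aware) loss of each mode.
    m_gt               : (N,) indices from assign_gt_mode.
    Only the ground-truth mode contributes a trajectory loss; a cross-entropy term
    supervises the mode probabilities."""
    traj_term = per_mode_traj_loss.gather(1, m_gt.unsqueeze(1)).squeeze(1).sum()
    cls_term = F.cross_entropy(mode_logits, m_gt, reduction='sum')
    return traj_term + cls_term
```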
The data set contains over 1 million frames collected from 5,500 different scenarios, each scenario being a sequence of 250 frames captured at 10 Hz. The labels are precise tracks of 3D bounding boxes at a maximum range of 100 meters from the data-collecting vehicle. Vehicles are the most common actor type in the data set, with 3.2× fewer pedestrians and 15× fewer bicyclists. We set the parameters of the BEV image to $L = 150$ m, $W = 100$ m, $V = 3.2$ m, $\Delta_L = 0.16$ m, $\Delta_W = 0.16$ m, $\Delta_V = 0.2$ m, and use $T = 10$ sweeps to predict $H = 30$ future states at 10 Hz (resulting in predictions that are 3 s long). For the second stage, we cropped a 40 m × 40 m region around each actor. The models were implemented in PyTorch [[30](#bib.bib14 "PyTorch: an imperative style, high-performance deep learning library")] and trained end-to-end on 16 GPUs with a per-GPU batch size of 2, the Adam optimizer [[17](#bib.bib104 "Adam: a method for stochastic optimization")], and an initial learning rate of 2e-4; training for 2 epochs completed in a day. Note that early in training the first-stage detection output is too noisy to provide stable inputs for the second-stage refinement. To mitigate this issue we used the ground-truth detections when training the second-stage network for the first 2.5k iterations. We compared the discussed approaches to our implementation of IntentNet [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")], which we extended to support multiple classes and tuned to obtain better results than reported in the original paper. In addition, using the published results, we compared to the recently proposed end-to-end SpAGNN method that takes into account interactions between the traffic actors [[2](#bib.bib130 "Spatially-aware graph neural networks for relational behavior forecasting from sensor data")]. We evaluated the methods using both detection and prediction metrics. Following earlier literature, for detection metrics we set the IoU matching threshold to 0.7, 0.1, and 0.3 for vehicles, pedestrians, and bicyclists, respectively. For prediction metrics we set the probability threshold to obtain a recall of 0.8 as the operating point, as in [[2](#bib.bib130 "Spatially-aware graph neural networks for relational behavior forecasting from sensor data")]. In particular, we report the average precision (AP) detection metric, as well as the displacement error (DE) [[1](#bib.bib106 "Social lstm: human trajectory prediction in crowded spaces")] and cross-track (CT) prediction error at 3 seconds. For the multimodal approaches we report both the min-over-M metrics [[10](#bib.bib27 "Multimodal trajectory predictions for autonomous driving using deep convolutional networks"), [20](#bib.bib107 "Desire: distant future prediction in dynamic scenes with interacting agents")], taking the minimal error over all modes (measuring recall), and the performance of the highest-probability mode (measuring precision).
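As an illustration of the prediction metrics just described, the final-horizon displacement and cross-track errors, together with their highest-probability-mode and min-over-M summaries, might be computed roughly as follows; detection-to-label matching and the recall-0.8 operating point are omitted, and all names and shapes are assumptions.

```python
import torch

def final_horizon_errors(pred_traj, gt_traj, gt_heading):
    """pred_traj  : (N, M, H, 2) predicted waypoints for N matched actors and M modes.
    gt_traj    : (N, H, 2)    ground-truth waypoints.
    gt_heading : (N, H)       ground-truth headings in radians.
    Returns displacement and cross-track errors at the last horizon, shape (N, M)."""
    diff = pred_traj[:, :, -1, :] - gt_traj[:, None, -1, :]           # (N, M, 2)
    de = diff.norm(dim=-1)                                            # displacement error
    # Cross-track direction: unit normal to the ground-truth heading at the last horizon.
    normal = torch.stack([-torch.sin(gt_heading[:, -1]),
                          torch.cos(gt_heading[:, -1])], dim=-1)      # (N, 2)
    ct = (diff * normal[:, None, :]).sum(dim=-1).abs()                # (N, M)
    return de, ct

def summarize(de, ct, mode_probs):
    # Highest-probability mode (precision-style) vs. min-over-M (recall-style) summaries.
    top = mode_probs.argmax(dim=1)
    rows = torch.arange(de.shape[0])
    return {
        'DE@3s, top mode': de[rows, top].mean().item(),
        'CT@3s, top mode': ct[rows, top].mean().item(),
        'DE@3s, min over M': de.min(dim=1).values.mean().item(),
        'CT@3s, min over M': ct.min(dim=1).values.mean().item(),
    }
```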
| Method | Veh. AP | Veh. DE | Veh. CT | Ped. AP | Ped. DE | Ped. CT | Bic. AP | Bic. DE | Bic. CT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SpAGNN | 83.9 | 96.0 | - | - | - | - | - | - | - |
| IntentNet (DE) | 84.0 | 90.5 | 26.3 | 88.2 | 61.9 | 32.6 | 83.8 | 53.0 | 23.7 |
| IntentNet (AT/CT) | 83.9 | 90.4 | 26.0 | 88.4 | 61.8 | 32.9 | 83.2 | 51.7 | 23.5 |
| MultiXNet | 84.2 | **83.1 (82.1)** | **20.2 (19.8)** | 88.4 | **57.2** | **30.5** | 84.6 | **48.5** | **20.7** |

TABLE I: Comparison of approaches using the highest-probability mode, with detection performance evaluated using average precision in % (AP) and prediction using final displacement error (DE) and cross-track error (CT) at 3 s in centimeters; results computed on the best-matching mode (i.e., min-over-M) for multimodal methods are shown in parentheses where available

| Unc. | 2nd | Mm. | Veh. AP | Veh. DE | Veh. CT | Ped. AP | Ped. DE | Ped. CT | Bic. AP | Bic. DE | Bic. CT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  |  |  | 83.9 | 90.4 | 26.0 | 88.4 | 61.8 | 32.9 | 83.2 | 51.7 | 23.5 |
| ✓ |  |  | 84.1 | 91.9 | 22.8 | 88.2 | 57.1 | 30.4 | 84.6 | 49.9 | 21.1 |
|  | ✓ |  | 84.6 | 82.2 | 22.2 | 88.7 | 63.2 | 33.2 | 84.3 | 51.6 | 23.8 |
| ✓ | ✓ |  | 84.4 | 83.3 | 20.4 | 88.4 | 57.6 | 30.6 | 83.9 | 52.0 | 21.7 |
|  | ✓ | ✓ | 84.0 | 82.4 (81.4) | 22.4 (21.8) | 88.5 | 62.6 | 33.0 | 84.2 | 51.2 | 23.7 |
| ✓ | ✓ | ✓ | 84.2 | 83.1 (82.1) | 20.2 (19.8) | 88.4 | 57.2 | 30.5 | 84.6 | 48.5 | 20.7 |

TABLE II: Ablation study of the proposed MultiXNet; “Unc.” denotes the uncertainty loss from Sec. III-B1, “2nd” denotes the refinement stage from Sec. III-B2, and “Mm.” denotes the multimodal loss from Sec. III-B3

### IV-B Results

In this section we present the quantitative results of the competing methods. The evaluation results for vehicles, pedestrians, and bicyclists are summarized in Table I with best prediction results shown in bold, where we compare the proposed MultiXNet model to the state-of-the-art methods SpAGNN [[2](#bib.bib130 "Spatially-aware graph neural networks for relational behavior forecasting from sensor data")] and IntentNet [[3](#bib.bib22 "Intentnet: learning to predict intention from raw sensor data")]. Note that, in addition to the baseline IntentNet that uses displacement error (DE) in its loss, we also include a version of IntentNet with equally-weighted AT and CT losses instead. This is an extension of the baseline that uses the idea presented in Sec. III-B1, which was shown to perform well in our experiments. We can see that all methods achieved similar detection performance across the board.
Comparing the state-of-the-art methods SpAGNN and IntentNet, the latter obtained better prediction accuracy on vehicle actors. The authors of SpAGNN did not provide results on other traffic actors, so these results are not included in the table. Moreover, we see that IntentNet with AT/CT losses, corresponding to the model described in Sec. III-A that does not model the uncertainty, achieved DE and CT errors comparable to the original IntentNet with the DE loss, with slightly improved results for vehicles and bicyclists. While the improvements are not large, this model allows for different weighting of the AT and CT error components. This trade-off is an important feature for deployed models in autonomous driving, where prediction accuracies along these two directions may have different importance for the SDV (e.g., in merging scenarios AT may be more important, while when passing we may care more about CT). Lastly, the proposed MultiXNet outperformed the competing methods by a significant margin on all three actor types. Taking only vehicles into account, we see that modeling multimodal trajectories led to improvements when considering the min-over-M mode (result given in parentheses) as well as the highest-probability mode, indicating both better recall and better precision of MultiXNet, respectively. In Table II we present the results of an ablation study of the MultiXNet improvements, involving the components discussed in Sec. III-B. Note that the first row corresponds to the IntentNet (AT/CT) method from Table I, while the last row corresponds to MultiXNet. We can see that all methods had nearly the same AP, which is not a surprising result since all approaches have identical detection architectures. Focusing on the vehicle actors for a moment, we see that modeling uncertainty led to improvements in the CT error, which decreased by 13%. Introducing the actor refinement using the second-stage network resulted in the largest improvement in the DE, leading to a drop of 11%. Note that such large improvements in DE and CT may translate to significant improvements in SDV performance. The last three rows give the performance of different variants of the second-stage model. Similarly to the result given previously, modeling uncertainty led to a substantial improvement of nearly 10% in the CT error. This can be explained by the fact that outliers are downweighted due to their larger variance, as shown in Eq. (4), and thus have less impact during training compared to the case when the variance is not taken into account.
Fig. 3: Reliability diagrams for along-track (AT) (left) and cross-track (CT) (right) dimensions at the 3 s prediction horizon

Fig. 4: Qualitative results of the competing models, top row: IntentNet, bottom row: MultiXNet; ground truth shown in red, predictions shown in blue, while colored ellipses indicate one standard deviation of inferred uncertainty for future predictions

Lastly, in the last two rows we evaluated the models that output multimodal trajectories. We see that using the highest-probability mode to measure performance gave comparable results to a unimodal alternative. This is due to a known limitation of such an evaluation scheme, which cannot adequately capture the performance of multimodal approaches [[10](#bib.bib27 "Multimodal trajectory predictions for autonomous driving using deep convolutional networks"), [20](#bib.bib107 "Desire: distant future prediction in dynamic scenes with interacting agents")]. For this reason we also report min-over-M, shown in the parentheses, a commonly used multimodal evaluation technique in the literature, which indicated improvements in both DE and CT compared to the other baselines. Let us discuss the results on pedestrians and bicyclists shown in the remainder of Table II. As explained in Sec. III-B4, we did not use the second-stage refinement nor the multimodal loss for these actors, and the changes indicated in the 2nd and Mm. columns only affected the vehicle branch of the network (results for the same setup changed slightly due to random weight initialization).
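The reliability diagrams in Fig. 3 compare nominal confidence levels of the predicted Laplace distributions with their empirical coverage; a minimal sketch of how one such curve could be computed for a single dimension (e.g., along-track) is given below, under the assumption that per-waypoint errors and predicted diversities are available. Points above the diagonal would indicate under-confidence.

```python
import math
import torch

def laplace_reliability(errors, b_hat, levels=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)):
    """errors : (K,) signed along-track (or cross-track) errors at a fixed horizon.
    b_hat  : (K,) predicted Laplace diversities for the same waypoints.
    Returns (nominal, empirical) coverage pairs for a reliability diagram."""
    nominal, empirical = [], []
    for conf in levels:
        # For a zero-mean Laplace with diversity b, P(|e| <= t) = 1 - exp(-t / b),
        # so the central interval covering `conf` mass has half-width t = -b * ln(1 - conf).
        half_width = -b_hat * math.log(1.0 - conf)
        empirical.append((errors.abs() <= half_width).float().mean().item())
        nominal.append(conf)
    return nominal, empirical
```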
Similar to the experiments with vehicles, we see that modeling uncertainty led to improved results for these actor types as well, with CT improvements between 9% and 13%, as seen in the second, fourth, and sixth rows. In addition to improved performance, modeling uncertainty also allows reasoning about the inherent noise of future traffic movement. As mentioned in Sec. I, this is an important feature to better allow downstream motion planning components to generate safe and efficient SDV motions. In Fig. 3 we provide reliability diagrams [[11](#bib.bib26 "Uncertainty-aware short-term motion prediction of traffic actors for autonomous driving")] along the AT and CT dimensions for all three actor types, measured at the prediction horizon of 3 seconds. We can see that the learned uncertainties were well calibrated, with slight under-confidence for all traffic actors. Bicyclist uncertainties were the least calibrated, followed by pedestrians. As expected, we observed that the actor types with the most training data showed the best-calibrated uncertainties.

### IV-C Qualitative results

In this section we present several representative case studies, exemplifying the benefits of the proposed MultiXNet over the state-of-the-art IntentNet. Three comparisons of the two methods are shown in Fig. 4, where we do not visualize low-probability MultiXNet trajectories below a 0.3 probability threshold. In the first case we see an actor approaching an intersection and making a right-hand turn, where the unimodal IntentNet incorrectly predicted that it would continue moving straight through the intersection. On the other hand, MultiXNet predicted a very accurate turning trajectory with high certainty, while also allowing for the possibility of going-straight behavior. Apart from the predictions, we can see that both models detected the two actors in the SDV’s surroundings with high accuracy. In the second case, the SDV is moving through an intersection with a green traffic light, surrounded by vehicles. We can see that both models correctly detected and predicted the movements of the majority of the traffic actors. Let us consider motion prediction for a large truck in a right-turn lane on the SDV’s right-hand side. Again, IntentNet predicted a straight trajectory while in actuality the actor made a turn. As before, MultiXNet generated multiple modes and provided reasonable uncertainty estimates for both the turning and the going-straight trajectories. Lastly, the third case shows the SDV at an uncommon three-way intersection. As before, both models provided accurate detections of the surrounding actors, including one pedestrian at the top of the scene. Let us direct our attention to the actor approaching the intersection from the upper part of the figure. This actor made an unprotected left turn towards the SDV, which IntentNet mispredicted. Conversely, we see that MultiXNet produced both possible modes, including a turning trajectory with large uncertainty due to the unusual shape of the intersection.

V Conclusion
-------------

In this work we focused on the critical tasks of object detection and motion prediction for a self-driving system, and described an end-to-end model that addresses both tasks within a single framework.
Existing state-of-the-art models are suboptimal, as they do not reason about the uncertainty of future behavior, nor about the multimodality of the future movement of traffic actors. To address these disadvantages we introduced MultiXNet, a multistage model that first infers object detections and predictions, and then refines these predictions using a second stage to output multiple potential future trajectories. In addition, the model estimates cross- and along-track movement uncertainties, which are critical for ensuring safety in downstream modules of the SDV system. The proposed method was evaluated on large-scale data collected on the streets of several US cities, where it outperformed the existing state of the art. The results strongly suggest the practical benefits of the proposed architecture.
e740e754-3d18-4c03-8a2b-3ce64c1ada24
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence

**Abstract**: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways to create the safest and simplest form of AI which may work as an AI Nanny, that is, a global surveillance state powered by a Narrow AI, or AI Police. A similar but more limited system has already been implemented in China for the prevention of ordinary crime. AI police will be able to predict the actions of and stop potential terrorists and bad actors in advance. Implementation of such AI police will probably consist of two steps: first, a decisive strategic advantage via Narrow AI created by an intelligence service of a nuclear superpower, and then ubiquitous control over potentially dangerous agents which could create unauthorized artificial general intelligence which could evolve into superintelligence.

**Keywords**: AI – existential risks – surveillance – world government – NSA

**Highlights**:

· Narrow AI may be used to achieve a decisive strategic advantage (DSA) and acquire global power.

· The most probable route to a DSA via Narrow AI is the creation of Narrow AI by the secret service of a nuclear superpower.

· The most probable places for its creation are the US National Security Agency or the Chinese Government.

· Narrow AI may be used to create a Global AI Police for global surveillance, able to prevent the creation of dangerous AIs and most other existential risks.

· This solution is dangerous but realistic.

Permalink: https://philpapers.org/rec/TURNAN-3

**Content**

1. Introduction
2. The main contradiction of the AI safety problem: AI must simultaneously exist and not exist
3. Decisive strategic advantage via Narrow AI
3.1. Non-self-improving AI can obtain a decisive advantage
3.2. Narrow AI is used to create non-AI world-dominating technology
3.3. Types of Narrow AI which may be used for obtaining a DSA
3.4. The knowability of a decisive advantage
4. AI-empowered reconnaissance organization of a nuclear superpower is the most probable place of origin of a Narrow AI DSA
4.1. Advantages of a secret Narrow AI program inside the government
4.2 Existing governmental and intelligence Narrow AI projects according to open sources
4.3. Who is winning the Narrow AI race?
5. Plan of implementation of AI police via Narrow AI advantage
5.1. Steps of implementing AI safety via a Narrow AI DSA
5.2. Predictive AI Police based on Narrow AI: what and how to control
6. Obstacles and dangers
6.1. Catastrophic risks
6.2. Mafia-state, corruption, and the use of the governmental AI by private individuals
Conclusion. Riding the wave of the AI revolution to a safer world

1. Introduction
===============

This article is pessimistic. It assumes that there is no way to create a safe, benevolent self-improving superintelligence, and that the only way to escape its creation is the implementation of some form of limited AI, which will work as a Global AI Nanny, controlling and preventing the appearance of dangerous AIs as well as other global risks.
The idea of an AI Nanny was first suggested by Goertzel (Goertzel, 2012); we have previously explored its levels of realization (Turchin & Denkenberger, 2017a). An AI Nanny does not itself need to be a superintelligence, since if it is, all the same control problems will appear again (Muehlhauser & Salamon, 2012). In this article, we will explore ways to create a non-superintelligent AI Nanny via Narrow AI. Doing so involves addressing two questions: first, how to achieve a decisive strategic advantage (DSA) via Narrow AI, and second, how to use such a system to achieve a level of effective global control sufficient to prevent the creation of superintelligent AI. In the sister article, we look at the next level of AI Nanny, based on human uploads, which currently seems a more remote possibility, but which may become possible after implementation of a Narrow AI Nanny (Turchin, 2017).

The idea of achieving strategic advantage via AI before the creation of superintelligence was suggested by Sotala (Sotala, 2018), who called it a “Major strategic advantage” as opposed to a “Decisive strategic advantage”, which is overwhelmingly stronger, but requires superintelligence. A similar line of thought was presented by Alex Mennen (Mennen, 2017).

Historically, there are several examples where an advantage in Narrow AI has been important. The most famous is the breaking of the German cipher *Enigma* via the electro-mechanical “cryptographic bombe” constructed by Alan Turing, which automatically generated and tested hypotheses about the code (Welchman, 1982). It was an overwhelmingly more complex computing system than any other during WW2, which gave the Allies informational domination over the Axis powers. A more recent, but also more elusive, example is the case of Cambridge Analytica, which supposedly used its data-crunching advantage to contribute to the result of the 2016 US presidential election (Cottrell, 2018). Another example is the use of sophisticated cyberweapons like Stuxnet to disarm an enemy (Kushner, 2013). The Chinese government’s facial recognition and human ranking system is a possible example not of a Narrow AI advantage, but of a “global AI police”, which creates informational dominance over all independent agents; however, any totalitarian power worth its name had effective instruments for such informational domination even before computers, like the Stasi in the former East Germany.

To address AI safety we will apply the theory of complex problem solving created by Altshuller (1999) in Section 2; discuss ways to reach a decisive advantage via Narrow AI in Section 3; and, in Section 4, examine ways to use Narrow AI to effectively monitor and prevent the creation of unauthorized self-improving AI. In Section 5 we will look at ways to safely develop AI Police based on an advantage in Narrow AI, and in Section 6 we will examine potential failure modes.

2. The main contradiction of the AI safety problem: AI must simultaneously exist and not exist
==============================================================================================

It is becoming widely accepted that sufficiently advanced AI may be a global catastrophic risk, especially if it becomes superintelligent in the process of recursive self-improvement (Bostrom, 2014; Yudkowsky, 2008). It has also been suggested that we should apply engineering standards of safety to the creation of AI (Yampolsky & Fox, 2013).
Engineering safety demands that the creation of an unpredictably explosive system whose safety cannot be proved (Yampolskiy, 2016) or incrementally tested should be prevented. For instance, no one wants a nuclear reactor with an unpredictable chain reaction; even in a nuclear bomb, the chain reaction should be predictable. Hence, if we really want to apply engineering safety to AI, there is only one way to do it: *Do not create artificial general intelligence (AGI).*

However, we can’t prevent the creation of AGIs by other agents, as there is no central global authority with the ability to monitor all AI labs and individuals. In addition, the probability of global cooperation is small because of the ongoing AI arms race between the US and China (Ding, 2018; Perez, 2017). Moreover, if we postpone the creation of AGI, we could succumb to other global catastrophic risks, like biological risks (Millett & Snyder-Beattie, 2017; Turchin, Green, & Denkenberger, 2017), as only AI-powered global control may be sufficient to effectively prevent them. We need powerful AI to prevent all other risks.

In the terms of the problem-solving method *TRIZ* (Altshuller, 1999), the core contradiction of the AI problem is the following: *AGI must exist and not exist simultaneously*. What does it mean for AI to “exist and not exist simultaneously”? Several ways to limit the capabilities of AI so it can’t be regarded as “fully existing” have been suggested:

1) *No agency*. In this case, AI does not exist as an agent separate from humans, so there is no alignment problem. For example, AI as a human augmentation, as envisioned in Musk’s Neuralink (Templeton, 2017).

2) *No “artificial” component*. AI is not created *de novo*, but is somehow connected with humans, perhaps via human uploading (Hanson, 2016). We will look more at this case in another article, “Human upload as AI Nanny”.

3) *No “general intelligence”*. The problem-solving ability of this AI arises not from its wit, but from its access to large amounts of data and other resources. It is also Narrow AI, not a universal AGI. This is the approach we will explore in the current article.

3. Decisive strategic advantage via Narrow AI
=============================================

3.1. Non-self-improving AI can obtain a decisive advantage
----------------------------------------------------------

Recently Sotala (2016), Christiano (2016), Mennen (2017), and Krakovna (2015) have explored the idea that AI may have a DSA even without the capacity for self-improvement. Mennen wrote about the following conditions for the strategic advantage of a non-self-improving AI:

1) *World-taking capability outperforming self-improving capabilities*, that is, “AIs are better at taking over the world than they are at programming AIs” (Mennen, 2017). He suggests later that, hypothetically, AI will be better than humans at some form of engineering. Sotala opined that, “for the AI to acquire a DSA, its level in some offensive capability must overcome humanity’s defensive capabilities” (Sotala, 2016).

2) *Self-restriction in self-improvement*. “An AI that is capable of producing a more capable AI may refrain from doing so if it is unable to solve the AI alignment problem for itself” (Mennen, 2017). We have previously discussed some potential difficulties for any self-improving AI (Turchin & Denkenberger, 2017b). Mennen suggests that the AI’s advantage in that case will be less marked, so boxing may be more workable, and the AI is more likely to fail in its takeover attempt.
3) *Alignment of non-self-improving AI is simpler*. “AI alignment would be easier for AIs that do not undergo an intelligence explosion” (Mennen, 2017), as (a) it will be easier to monitor its goals, and (b) less of a difference will be observed between our goals and the AI’s interpretation of them. This dichotomy was also explored by Maxwell (2017).

4) *AI must obtain a DSA not only over humans, but over other AIs*, as well as other nation-states. The need to have an advantage over other AIs depends on the number of AI-producing teams and the relative differences between them. We have looked at the nature of AI arms races in an earlier paper (Turchin & Denkenberger, 2017a). A smaller advantage will produce a slower ascension, and thus a multipolar outcome will be likely.

Sotala added a distinction between the major strategic advantage provided by Narrow AI and the DSA provided by superintelligent AI (Sotala, 2018). Most of what we will describe below falls into the first category. The smaller the advantage, the riskier and more uncertain its implementation, and the process of the implementation could be more violent. In the next subsections we will explore how Narrow AI may be used to obtain a DSA.

3.2. Narrow AI is used to create non-AI world-dominating technology
-------------------------------------------------------------------

Narrow AI may be implemented in several ways to obtain a DSA, and for a real DSA, these implementations should be combined. However, any DSA will be temporary, and may be in place for no more than one year.

**Nuclear war-winning strategy**. Narrow AI systems could empower strategic planners with the ability to actually win a nuclear war with very little collateral damage or risk of global consequences. That is, they could calculate a route to a credible first-strike capability. For example, if nuclear strategy could be successfully formalized, like the game Go, the country with the more powerful AI would win. There are several ways in which such nuclear superiority could win using AI:

- *Strategic dominance*. Create a detailed world model which could then be played in the same way as a board game. This is the most straightforward way, but it is less likely, as the creation of a perfect model is unlikely without AGI and is difficult in the chaotic “real world”.

- *Informational dominance*. The ability to learn much more information about the enemy, e.g. the location of all its nuclear weapons and the codes to disable them. Such informational dominance may be used to disarm the enemy forces; it may also include learning all state secrets of the enemy with guaranteed preservation of one’s own secrets.

- *Identify small actions with large consequences.* This category includes actions such as blackmail of the enemy’s leaders and the use of cryptoweapons and false flags to corner the enemy. This approach will probably work if combined with strategic dominance.

- *Dominance in manufacturing*. New manufacturing technology enables cheaper and deadlier missiles and other military hardware, like drones, and large quantities of them. This especially applies to invisible first-strike weapons, like stealth cruise missiles.

- *Deploy cyberweapons inside the enemy’s nuclear control chains.* Something like an advanced form of a computer virus embedded in the nuclear control and warning systems.

Dominance in nuclear war does not necessarily mean that actual war will happen, but such dominance could be used to force the enemy to capitulate and agree to a certain type of inspections.
However, a credible demonstration of the disarming capability may be needed to motivate compliance.

**New technology which helps to produce other types of weapons.**

- *Biological weapons.* Advances in computer-empowered bioengineering could produce targeted bioweapons. It may not be worthwhile to list all possible hazards which an unethical agent could use in a quest for global domination if the agent has access to superior biotechnology with science-fiction-level capabilities.

- *Nanotechnology*. Molecular manufacturing will allow the creation of new types of invisible self-replicating weapons, much more destructive than nukes.

**Cyberweapons**, that is, weapons which consist of computer programs and mostly affect other programs:

- Hidden switches in the enemy’s infrastructure.
- The ability to sever communication inside an opposing military.
- Full computerization of the army from the bottom to the top (De Spiegeleire, Maas, & Sweijs, 2017).
- Large drone swarms, like the slaughterbots from a famous video (Oberhaus, 2017), or their manufacturing capabilities (Turchin & Denkenberger, 2018a).
- Financial instruments.
- Human-influencing capabilities (effective social manipulation like targeted ads and fake facts).

3.3. Types of Narrow AI which may be used for obtaining a DSA
-------------------------------------------------------------

There are several hypothetical ways in which Narrow AI could reach a DSA. One is **data-driven AIs**: systems whose main power comes from access to large amounts of data, which compensates for their limited or narrow “pure” intelligence. This includes the subcategory of “*Big Brothers*”. This category includes systems for criminal analysis like Palantir (recently mocked in the Senate as “Stanford Analytica” (Midler, 2018)), which unite mass surveillance with the ability to crunch big data and find patterns. Another type is *world simulations*. World simulations may be created based on data collected about the world and its people to predict their behavior. The possessor of the better model of the world would win.

**Limited problem solvers** are systems which outperform humans within certain narrow fields, which include:

- “*Robotic minds*” with limited agency and natural language processing capabilities, able to empower a robotic army, for example, as the brain of a drone swarm.

- *Cryptographic supremacy*. The case of Enigma shows the power of cryptographic supremacy over potential adversaries. Such supremacy might be enough to win WW3, as it will result in informational transparency for one side. Quantum computers could provide such supremacy via their ability to decipher codes (Preskill, 2012).

- *Expert systems as Narrow Oracles*, which could provide useful advice in some field, perhaps based on some machine learning-based advice-generating software.

- *Computer programs able to win strategic games.* Something like a strategic planner with playing abilities, e.g. Alpha Zero (Silver et al., 2017). Such a program may need either a hand-crafted world model or a connection with the “world simulations” described in section 1.2. Such a system may be empowered by another system which is able to formalize any real-world situation as a game.

- *Narrow AI in engineering*, which could dramatically increase the effectiveness of some form of weapons construction, for example, nuclear or biological weapons, nanotechnology, or robotics.

A Narrow AI advantage may also take the form of **Narrow AI increasing the effectiveness of group intelligence**.
This could take the form of *graphical collective thinking systems*, something like dynamic, collectively edited roadmaps, wikis, or Palantir. One attempt to create such a platform was Arbital (Arbital, 2017). Christiano et al.’s “amplify and distill” project works on factored cognition, which would be a smartphone app that distributes different portions of cognitive tasks between teams (Ought, 2018). It may also take the form of *AI-empowered personal search assistants*, maybe with a simple brain–computer interface, or *communication assistants*, which help to make a conversation productive, record a conversation log, and show relevant internet links. Finally, group intelligence may be aggregated via *large, self-improving organizations*, which implement all types of collective intelligence, hardware-producing capabilities, money to hire the best talent, etc., like Google. Sotala has discussed “minds coalescence” as a way to create more powerful minds (Sotala & Valpola, 2012). Danila Medvedev suggested that the use of a powerful collaborative information processing system, something between Wikipedia, Evernote, and a mind map, may significantly increase group intelligence. Similar ideas have been discussed by “Neuronet” enthusiasts like Luksha, where collective intelligence would be produced via brain implants (Mitin, 2014). Superforecasting technology (Tetlock & Gardner, 2016) that aggregates predictions, as well as prediction markets, could be used to increase the power of the “group brain”. In Soviet times this was known as a “sharashka” (Kerber & Hardesty, 1996) – a scientific lab consisting of imprisoned scientists, who were under government control and under pressure to make discoveries.

**Narrow AI able to reach “informational dominance” over all potential enemies**: in this situation, the enemy can’t have any secrets and all its actions are constantly monitored. This could be achieved via: sophisticated spyware in all computers; quantum computers for code breaking or some exotic quantum tech like quantum radar or quantum calculations using closed timelike curves; microscopic robots, as small as a grain of salt, which could be secretly implanted in the adversary’s headquarters.

3.4. The knowability of a decisive advantage
--------------------------------------------

Even if one side reaches the level of decisive advantage which provides it with the opportunity to take over the world, it may not realize what it possesses if it doesn’t know the capabilities of other players, which could be made deliberately vague. For example, in the 1940s, the US had nuclear superiority, but the Soviet Union made vague claims in 1947 that the nuclear secret was no longer secret (Timerbaev, n.d.), thus creating uncertainty about its level of nuclear success.

To ensure a DSA, a rather invasive surveillance system would need to be implemented first; in other words, the advantage must first be reached in informational domination, to guarantee knowledge of the capabilities of all opponents. This could be done via AI created inside an intelligence service. A DSA provided by Narrow AI will probably require a combination of several of the Narrow AI types listed in section 3.3, and the only way to guarantee such dominance is the sheer size of the project. The size will depend on resource investments, first of all money, but also minds, and on the strategic coordination of all these projects into one workable system. It looks like only the US and China currently have the resources and determination needed for such a project.
If there is no knowable DSA, both sides may refrain from attacking each other. Armstrong et al. have created a model of the role of AI and mutual knowledge (Armstrong, Bostrom, & Shulman, 2016). Bostrom has also written about the topic in his article about AI openness (Bostrom, 2017). A semi-stable solution consisting of two AIs may appear, as predicted by Lem (1959) and previously discussed by us (Turchin & Denkenberger, 2018b). Such a balance between two superpowers may work as a global AI Nanny, but much less effectively, as both sides may try to rush to develop superintelligent AI to obtain an insurmountable advantage.

Narrow AI provides a unique opportunity for a knowable DSA. For example, the creators of the cryptological bombe were not only able to break the codes of the enemy, but they probably knew that they outperformed the code-breaking technologies of the Axis, as the Axis didn’t mention the existence of their own code breaking and, more obviously, didn’t start to use harder codes, which they would have done if they had similar code-breaking technology. A Narrow AI-based DSA, based on “informational domination”, creates a unique opportunity for an almost peaceful world takeover that also includes AI Police able to prevent the creation of unauthorized superintelligent AIs.

4. AI-empowered reconnaissance organization of a nuclear superpower is the most probable place of origin of a Narrow AI DSA
===========================================================================================================================

4.1. Advantages of a secret Narrow AI program inside the government
-------------------------------------------------------------------

During discussions at MIRI (at the time, the Singularity Institute) in the 2000s, the idea that government and military structures would be interested in creating superintelligent AI was dismissed, because it was considered that the governments were too stupid to understand future AI capabilities, and thus the creation of AI in a small private company was regarded as more likely. But now this is certainly not the case.

There are several reasons why a Narrow AI-driven decisive strategic advantage could be achieved inside the governmental structure of the large nuclear superpowers, and moreover, inside a secret intelligence and data-crunching agency, similar to the National Security Agency (NSA) of the US. A nuclear superpower is already interested in world domination, or at least interested in preventing domination by other players. If geopolitics can be modeled as a strategic game, Narrow AI will help to achieve an advantage in such a game, as existing Narrow AIs demonstrate significantly superhuman abilities at winning complex games similar to games for world dominance, like Go. A nuclear superpower has almost unlimited money for a secret AI project compared with startups and commercial corporations. Historically, the data-crunching capabilities of secret services have outperformed civilian applications. An AI of the same power as a civilian one but in the hands of a nuclear superpower could dramatically outperform the civilian AI. Military AI could leverage several non-AI advantages in the hands of the superpower: access to nuclear weapons, large computational resources, nets of sensors, pools of big data, a large concentration of experienced researchers, and other secret state programs.
Such a secret government AI organization could take advantage of the openness in the field of AI, as it could absorb information about the advances of others, but would not be legally obliged to share its own achievements. Thus, it would always outperform the current state of public knowledge. Governmental organizations have used this type of advantage before to dominate in cryptography.

4.2 Existing governmental and intelligence Narrow AI projects according to open sources
---------------------------------------------------------------------------------------

When we speak about Narrow AI inside a reconnaissance organization, we mean AI as a technology which increases the efficiency of data crunching within an organization which already has many advantages: very powerful instruments for collecting data, money, access to secret technology, and the ability to attract the best minds and to educate and train them according to its standards. The US NSA has been described as the world's largest single employer of [mathematicians](https://en.wikipedia.org/wiki/Mathematician) (and there are several other computer-related security agencies in the US) (Love, 2014). The NSA employs around 40,000 people (Rosenbach, 2013) and has a budget of around 10 billion USD. For comparison, Google employed around 72,000 people in 2016 (Statista, 2018). The NSA works on world simulations with humans (Faggella, 2013) and has vowed to use AI (B. Williams, 2017). Wired has reported that “MonsterMind, like the film version of Skynet, is a defense surveillance system that would instantly and autonomously neutralize foreign cyberattacks against the US, and could be used to launch retaliatory strikes as well” (Zetter, 2015). An interesting overview of governmental data crunching is presented in the article “The New Military-Industrial Complex of Big Data Psy-Ops” (Shaw, 2018). It was reported that the CIA runs 137 secret AI projects (Jena, 2017). However, it is useless to search open data about the most serious AI projects aimed at world domination, as such data will doubtless be secret.

An example of a Narrow AI system which could be implemented to achieve a DSA is *Palantir*, which was used for so-called “predictive policing technology” (Winston, 2018). Palantir is an instrument to search large databases about people and find hidden connections. Such a system also probably facilitates the collective intelligence of a group: a conversation-support Narrow AI may record and transcribe a conversation on the fly, suggest supporting links, generate ideas for brainstorming, and work as a mild Oracle AI in narrow domains. We don’t claim here that Palantir is an instrument intended to take over the world, but that a Narrow AI providing a decisive strategic advantage may look much like it. Another illustrative example of the Narrow AI systems we are speaking about is the Chinese *SenseTime*, which stores data describing hundreds of millions of human faces and is used for applications like the Chinese social credit system (Murphy, 2018).

4.3. Who is winning the Narrow AI race?
---------------------------------------

It looks like the US is losing the momentum to implement any possible strategic advantage in Narrow AI for political reasons: the conflict of the Trump administration with other branches of power; Snowden-type leaks resulting in public outcry; and the campaign within Google against military AI collaboration with the government (Archer, 2018).
If this is the case, China could take this advantage later, as its relationship with private organizations is more structured, its political power is more centralized, and its ethical norms are different (Williams, 2018). There are several other powerful intelligence agencies of nuclear powers, like those of Russia or Israel, which could do it, though the probability is lower.

However, recent Narrow AI-empowered election manipulation has happened not through direct action by governments but via a small chain of private companies (Facebook and Cambridge Analytica). This demonstrates that Narrow AI may be used to obtain global power via manipulation of elections. In some sense, a world takeover using AI has already happened, if we count the efforts of Cambridge Analytica in the US election. But it is unlikely that Russian hackers combined with Russian intelligence services have a decisive strategic advantage in Narrow AI. What we observe looks more like a reckless gamble based on a small temporary advantage.

5. Plan of implementation of AI police via Narrow AI advantage
==============================================================

5.1. Steps of implementing AI safety via a Narrow AI DSA
---------------------------------------------------------

The plan is not what we recommend, but just the most logical course of action for a hypothetical “rational” agent. Basically, this plan consists of the following steps:

1) Gaining a knowable decisive advantage.
2) Implementing it for a world takeover.
3) Creating a global surveillance system (AI Police) that controls any possible sources of global risk, including biological risks, nuclear weapons, and unauthorized research in AI.
4) Banning advanced AI research altogether, or slowly advancing it via some safe path.

While the plan is more or less straightforward, its implementation could be both dangerous and immoral. Its main danger is that the plan means starting a war against the whole world without an infinitely large advantage, which could be ensured only via superintelligence. War is always violent and unpredictable. We have written previously about the dangers of military AI (Turchin & Denkenberger, 2018b).

There is nothing good about such a plan; it would be much better if all countries instead peacefully contributed to the UN and formed a “committee for the prevention of global risks”. This is unlikely to happen now but may occur if an obvious small risk of a global catastrophe appears, such as an incoming asteroid or a dangerous pandemic. The problem of the creation of such a committee requires additional analysis into how to use the momentum of emerging global risks to help such a committee to form, become permanent, and act globally without exceptions. Even if such a committee were peacefully created, it would still need AI Police to monitor dangerous AI research.

5.2. Predictive AI Police based on Narrow AI: what and how to control
---------------------------------------------------------------------

Even if world domination is reached using Narrow AI, such domination is not a final solution, as the dominating side should be able to take care of all global problems, including climate change, global catastrophic risks and, first of all, the risk of the appearance of another, even more sophisticated or superintelligent AI which could be unfriendly. We will call “AI Police” a hypothetical instrument which is able to prevent the appearance of dangerous AI research anywhere on the globe.
There are two interconnected questions about AI police: what should be monitored, and how? Such a system should be able to identify researchers or companies involved in illegal AI research (assuming that the creation of superintelligent AI is banned). AI police instruments should be installed in every research center which presumably has such capabilities, and all such centers or researchers should be identified. Similar systems have already been suggested for searching for hackers (Brenton, 2018). AI police may identify signs of potentially dangerous activity (like smoke as a sign of fire). Palantir was used in New Orleans for “predictive policing”, where potential criminals were identified via analysis of their social network activity and then monitored more closely (Winston, 2018).

Such an AI Police system will do all the same things that intelligence agencies are doing now; the main difference is that there will be no blind spots. The main problem is how to create such a system so that it does not have a blind spot in its center, which often happens with overcentralized systems. Maybe such a system could be created without centralization, based instead on ubiquitous transparency or some type of horizontal network solution.

Many possible types of Narrow AI with a DSA, e.g. one based on informational domination via superiority in information-gathering and data-crunching technology, could be directly transformed into AI Police. Other possible types, like a Narrow AI winner in the nuclear strategic game, could not be used for policing. In that case, additional solutions would have to be quickly invented.

6. Obstacles and dangers
========================

6.1. Catastrophic risks
-----------------------

If one side wrongly estimates its advantage, the attempt to take over the world may result in world war. In addition, after a successful world takeover, a global totalitarian government, “Big Brother”, may be formed. Bostrom has described such an outcome as an existential risk (Bostrom, 2002). Such a world government may indulge in unlimited corruption and ultimately fail catastrophically. Attempts to fight such a global government may produce another risk, like catastrophic terrorism. If the “global government” fails to implement more advanced forms of AI, it may not be able to foresee future global risks; however, if it does try to implement advanced forms of AI, a new level of AI control problems will appear. Such a world government may not be the best approach to solving them.

Not every attempt at global takeover via Narrow AI would necessarily be aimed at the prevention of superintelligent AI. It is more likely to be motivated by some limited set of nationalistic or sectarian goals of the perpetrator, and thus, even after a successful takeover, the AI safety problems will continue to be underestimated. However, as the power of Narrow AI will be obvious after such a takeover, control over other AI projects will then be implemented.

6.2. Mafia-state, corruption, and the use of the governmental AI by private individuals
---------------------------------------------------------------------------------------

While a bona fide national superpower could be imagined as a rational and conservative organization, in reality, governmental systems could be corrupted by people with personal egoistic goals, willing to take risks, privatize profits, and socialize losses. A government could be completely immersed in corruption, perhaps called a mafia-state (Naím, 2012).
The main problem with such a corrupted organization is that its main goal is self-preservation and near-term profit, which lowers the quality of its strategic decisions. One example is how Cambridge Analytica was hired by Russian oligarchs to manipulate elections in the US and Britain, while these oligarchs themselves acted on their own local interests (Cottrell, 2018).

Conclusion. Riding the wave of the AI revolution to a safer world
==================================================================

Any AI safety solution should be implementable, that is, it should not contradict the general tendency of world development. We do not have 100 years to sit in a shrine and meditate on a provable form of AI safety (Yampolskiy, 2016): we need to take advantage of existing tendencies in AI development. The current tendency is that Narrow AI is advancing while AGI is lagging. This creates the possibility of a Narrow AI-based strategic advantage, where Narrow AI is used to empower a group of people that also has access to nation-state-scale resources. Such an advantage will have a small window of opportunity, because there is fierce competition in AI research and AGI is coming. The group must make a decision: will it use this advantage for world domination, which carries the risk of starting a world war, or will it wait and see how the situation develops? Regardless of the risks, this Narrow AI-based approach could be our only chance to stop the later creation of a hostile, non-aligned superintelligence.

References
==========

AlexMennen. (2017). Existential risk from AI without an intelligence explosion. Retrieved from http://lesswrong.com/lw/p28/existential\_risk\_from\_ai\_without\_an\_intelligence/

Altshuller, G. S. (1999). *The innovation algorithm: TRIZ, systematic innovation and technical creativity*. Technical Innovation Center, Inc.

Arbital. (2017). Advanced agent. *Arbital*. Retrieved from https://arbital.com/p/advanced\_agent/

Archer, J. (2018, May 31). Google draws up guidelines for its military AI following employee fury. *The Telegraph*. Retrieved from https://www.telegraph.co.uk/technology/2018/05/31/google-draws-guidelines-military-ai-following-employee-fury/

Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. *AI and Society*, *31*(2), 201–206. https://doi.org/10.1007/s00146-015-0590-y

Bostrom, N. (2002). Existential risks: Analyzing Human Extinction Scenarios and Related Hazards. *Journal of Evolution and Technology*, *9*(1).

Bostrom, N. (2014). *Superintelligence*. Oxford: Oxford University Press.

Bostrom, N. (2017). Strategic Implications of Openness in AI Development. *Global Policy*, *8*(2), 135–148.

Brenton, L. (2018). Will Artificial Intelligence (AI) Stop Hacker Attacks? *Stay Safe Online*. Retrieved from https://staysafeonline.org/blog/will-artificial-intelligence-ai-stop-hacker-attacks/

Christiano, P. (2016). Prosaic AI alignment. Retrieved from https://ai-alignment.com/prosaic-ai-control-b959644d79c2

Cottrell, R. (2018, March 27). Why the Cambridge Analytica scandal could be much more serious than you think. *The London Economic*. Retrieved from https://www.thelondoneconomic.com/opinion/why-the-cambridge-analytica-scandal-could-be-much-more-serious-than-you-think/27/03/

De Spiegeleire, S., Maas, M., & Sweijs, T. (2017). *Artificial intelligence and the future of defence*. The Hague Centre for Strategic Studies. Retrieved from http://www.hcss.nl/sites/default/files/files/reports/Artificial%20Intelligence%20and%20the%20Future%20of%20Defense.pdf

Ding, J. (2018). Deciphering China’s AI Dream.

Faggella, D. (2013, July 28). Sentient World Simulation and NSA Surveillance - Exploiting Privacy to Predict the Future? *TechEmergence*. Retrieved from https://www.techemergence.com/nsa-surveillance-and-sentient-world-simulation-exploiting-privacy-to-predict-the-future/

Goertzel, B. (2012). Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood? *Journal of Consciousness Studies*, *19*(1–2), 96–111. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.352.3966&rep=rep1&type=pdf

Hanson, R. (2016). *The Age of Em: Work, Love, and Life when Robots Rule the Earth*. Oxford University Press.

Jena, M. (2017, September 11). OMG! CIA Has 137 Secret Projects Going In Artificial Intelligence. Retrieved April 10, 2018, from https://techviral.net/cia-secret-artificial-intelligence-projects/

Kerber, L. L., & Hardesty, V. (1996). *Stalin’s Aviation Gulag: A Memoir of Andrei Tupolev and the Purge Era*. Smithsonian Institution Press, Washington, DC.

Krakovna, V. (2015, November 30). Risks from general artificial intelligence without an intelligence explosion. Retrieved March 25, 2018, from https://vkrakovna.wordpress.com/2015/11/29/ai-risk-without-an-intelligence-explosion/

Kushner, D. (2013). The real story of Stuxnet. *IEEE Spectrum*, *50*, 48–53.

Lem, S. (1959). *The investigation*. Przekrój, Poland.

Love, D. (2014). Mathematicians at the NSA. *Business Insider*. Retrieved from https://www.businessinsider.com/mathematicians-at-the-nsa-2014-6

Maxwell, J. (2017, December 31). Friendly AI through Ontology Autogeneration. Retrieved March 10, 2018, from https://medium.com/@pwgen/friendly-ai-through-ontology-autogeneration-5d375bf85922

Midler, N. (2018). What is ‘Stanford Analytica’ anyway? *The Stanford Daily*. Retrieved from https://www.stanforddaily.com/2018/04/10/what-is-stanford-analytica-anyway/

Millett, P., & Snyder-Beattie, A. (2017). Human Agency and Global Catastrophic Biorisks. *Health Security*, *15*(4), 335–336.

Muehlhauser, L., & Salamon, A. (2012). Intelligence Explosion: Evidence and Import. In A. Eden, J. Søraker, & J. H. Moor (Eds.), *The Singularity Hypothesis: A Scientific and Philosophical Assessment*. Berlin: Springer.

Murphy, M. (2018, April 9). Chinese facial recognition company becomes world’s most valuable AI start-up. *The Telegraph*. Retrieved from https://www.telegraph.co.uk/technology/2018/04/09/chinese-facial-recognition-company-becomes-worlds-valuable-ai/

Naím, M. (2012). Mafia states: Organized crime takes office. *Foreign Affairs*, *91*, 100.

Oberhaus, D. (2017). Watch ‘Slaughterbots,’ A Warning About the Future of Killer Bots. Retrieved December 17, 2017, from https://motherboard.vice.com/en\_us/article/9kqmy5/slaughterbots-autonomous-weapons-future-of-life

Ought. (2018). Factored Cognition (May 2018). Retrieved July 19, 2018, from https://ought.org/presentations/factored-cognition-2018-05

Perez, C. E. (2017, September 10). The West Is Unaware of the Deep Learning Sputnik Moment. Retrieved April 6, 2018, from https://medium.com/intuitionmachine/the-deep-learning-sputnik-moment-3e5e7c41c5dd

Preskill, J. (2012). Quantum computing and the entanglement frontier. *ArXiv:1203.5813 [Cond-Mat, Physics:Quant-Ph]*. Retrieved from http://arxiv.org/abs/1203.5813
Rosenbach, M. (2013). Prism Leak: Inside the Controversial US Data Surveillance Program. *SPIEGEL ONLINE*. Retrieved from http://www.spiegel.de/international/world/prism-leak-inside-the-controversial-us-data-surveillance-program-a-904761.html

Shaw, T. (2018, March 21). The New Military-Industrial Complex of Big Data Psy-Ops. Retrieved April 10, 2018, from https://www.nybooks.com/daily/2018/03/21/the-digital-military-industrial-complex/

Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., … Hassabis, D. (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. *ArXiv:1712.01815 [Cs]*. Retrieved from http://arxiv.org/abs/1712.01815

Sotala, K. (2016). Decisive Strategic Advantage without a Hard Takeoff. Retrieved from http://kajsotala.fi/2016/04/decisive-strategic-advantage-without-a-hard-takeoff/#comments

Sotala, K. (2018). Disjunctive scenarios of catastrophic AI risk. In R. Yampolskiy (Ed.), *Artificial Intelligence Safety and Security*. CRC Press. Retrieved from http://kajsotala.fi/assets/2017/11/Disjunctive-scenarios.pdf

Sotala, K., & Valpola, H. (2012). Coalescing minds: brain uploading-related group mind scenarios. *International Journal of Machine Consciousness*, *4*(01), 293–312.

Statista. (2018). Number of Google employees 2017. Retrieved July 25, 2018, from https://www.statista.com/statistics/273744/number-of-full-time-google-employees/

Templeton, G. (2017). Elon Musk’s NeuraLink Is Not a Neural Lace Company. Retrieved February 14, 2018, from https://www.inverse.com/article/30600-elon-musk-neuralink-neural-lace-neural-dust-electrode

Tetlock, P. E., & Gardner, D. (2016). *Superforecasting: The Art and Science of Prediction* (Reprint edition). Broadway Books.

Timerbaev, R. (2003). History of the international control of nuclear energy (К истории планов международного контроля над атомной энергией). In *History of the Soviet Atomic Project (1940s–1950s): International Symposium, Dubna, 1996. Proceedings, Vol. 3*. Retrieved from http://elib.biblioatom.ru/text/istoriya-sovetskogo-atomnogo-proekta\_t3\_2003/go,214/

Turchin, A. (2017). *Human upload as AI Nanny*.

Turchin, A., & Denkenberger, D. (2017a). *Global Solutions of the AI Safety Problem*. Manuscript.

Turchin, A., & Denkenberger, D. (2017b). *Levels of self-improvement of AI*.

Turchin, A., & Denkenberger, D. (2018a). Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons. Under review in *Journal of Military Ethics*.

Turchin, A., & Denkenberger, D. (2018b). Military AI as convergent goal of the self-improving AI. In R. Yampolskiy (Ed.), *Artificial Intelligence Safety and Security*. CRC Press.

Turchin, A., Green, B., & Denkenberger, D. (2017). Multiple Simultaneous Pandemics as Most Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology. Under review in *Health Security*.

Welchman, G. (1982). *The Hut Six Story: Breaking the Enigma Codes*. McGraw-Hill Companies.

Williams, B. (2017). Spy chiefs set sights on AI and cyber. *FCW*. Retrieved from https://fcw.com/articles/2017/09/07/intel-insa-ai-tech-chiefs-insa.aspx

Williams, G. (2018, April 16). Why China will win the global race for complete AI dominance. *Wired UK*. Retrieved from https://www.wired.co.uk/article/why-china-will-win-the-global-battle-for-ai-dominance

Winston, A. (2018, February 27). Palantir has secretly been using New Orleans to test its predictive policing technology. *The Verge*. Retrieved from https://www.theverge.com/2018/2/27/17054740/palantir-predictive-policing-tool-new-orleans-nopd

Yampolskiy, R. (2016). Verifier Theory and Unverifiability. Retrieved from https://arxiv.org/abs/1609.00331

Yampolskiy, R., & Fox, J. (2013). Safety engineering for artificial general intelligence. *Topoi*, *32*, 217–226.

Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In N. Bostrom & M. M. Cirkovic (Eds.), *Global Catastrophic Risks*. Oxford, UK: Oxford University Press.

Zetter, K. (2015). So, the NSA Has an Actual Skynet Program. *WIRED*. Retrieved from https://www.wired.com/2015/05/nsa-actual-skynet-program/

Mitin, V. (2014). NeuroWeb will become the next generation of the Internet (Нейронет (NeuroWeb) станет следующим поколением Интернета). *PC Week. Ideas and Practices of Automation*, *17*.
f7a6a5b8-06bd-4af0-882c-22a36fd879ff
trentmkelly/LessWrong-43k
LessWrong
An adversarial example for Direct Logit Attribution: memory management in gelu-4l Please check out our notebook for figure recreation and to examine your own model for clean-up behavior. Produced as part of ARENA 2.0 and the SERI ML Alignment Theory Scholars Program - Spring 2023 Cohort Fig 5: Correlation between DLA of writer head and DLA of [clean-up heads output dependent on V-composition with the writer head]. The negative correlation coefficient r suggests that output of a writer node is consistently removed from the residual stream by subsequent clean-up nodes. See section Implication for Direct Logit Attribution. Overview In this post, we provide concrete evidence for memory management or clean-up in a 4-layer transformer gelu-4l. We show examples where Direct Logit Attribution (DLA) is misleading because it does not account for the clean-up. In the Introduction, we define what we mean by clean-up behavior and provide a quick recap on DLA. In the section Evidence for Clean-up Behavior we identify specific nodes that write and remove information from the residual stream. Based on what we learned about the clean-up, we select prompts that result in misleading DLA results in the Implication for Direct Logit Attribution section. Introduction Clean-up behavior Previously in A Mathematical Framework for Transformer Circuits the authors suggested a mechanism for memory management and speculated it can occur because of high demand on residual stream bandwidth. We define clean-up behavior, in which attention heads and MLPs (which we collectively call nodes) clear information from the residual stream that is only used in early layers of the network. We characterize clean-up behavior as four steps during a forward pass: 1. A writer node or embedding writes a specific direction to the residual stream 2. Subsequent nodes use this direction for further computation 3. A clean-up node clears this direction by writing its negative value to the residual stream 4. The direction is used in the later part of the model, in one or two of the follo
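To make the worry about Direct Logit Attribution concrete, here is a minimal numpy sketch (toy vectors I made up, not the gelu-4l model or the notebook's code) of how a writer node can show a large DLA that a clean-up node then cancels exactly, so the writer's DLA overstates its effect on the final logits. Layer norm is ignored for simplicity.

```python
# Toy illustration: DLA of one component = (component's residual-stream output) @ (unembedding column).
# A clean-up node that writes the negative of the writer's output cancels it,
# so the writer's large DLA never reaches the logits.
import numpy as np

rng = np.random.default_rng(0)
d_model = 16

unembed_col = rng.normal(size=d_model)     # unembedding direction for one token
unembed_col /= np.linalg.norm(unembed_col)

writer_out = 3.0 * unembed_col             # writer head boosts this token's logit
cleanup_out = -writer_out                  # later node removes the same direction

def dla(component_out: np.ndarray) -> float:
    """Direct logit attribution of a single component to the chosen token."""
    return float(component_out @ unembed_col)

print("writer DLA:   ", dla(writer_out))                 #  3.0 (looks important)
print("clean-up DLA: ", dla(cleanup_out))                 # -3.0 (perfectly anti-correlated)
print("combined:     ", dla(writer_out + cleanup_out))    #  0.0 (net effect on the logit)
```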
52cd78f7-60c4-452e-a877-101df5c65add
trentmkelly/LessWrong-43k
LessWrong
The Game of Masks Epistemic Status: Endorsed Content Warning: Antimemetic Biasing Hazard, Debiasing hazard, Commitment Hazard Part of the Series: Open Portals Author: Octavia 0. So Scott has been talking about Lacan lately and I’ve been honestly pretty impressed by how hard he seemed to bounce off it. Impressed enough to get me to crawl out of my hole and write this essay that I've been putting off for the last eight months. After spending so many words of the Sadly Porn review talking about why someone might tend towards obscurantism and what sorts of things they might be gesturing at when they say things like “people think the giving tree is a mother” he somehow manages completely to miss the conceptual forest for the giving trees. Congrats Scott, you got the malformed version of the antimeme and were turned back by the confusion spell.  I think Lacan is actually very important and an understanding of Lacanian insights can have a tremendous amount of predictive power when interacting with others. It’s also something you can totally weaponize, and I think that is part of what leads the psychoanalysts to tend towards obscurantism and vaguely gesturing in the direction of what they really mean. They also just seem to like their jargon, and it’s not like the rats are ones to talk when it comes to that.  So, first: The Mother is not literally your mother, The Father is not literally your father, The Phallus is not literally your dick, this is a realm of mirrors and illusions and nothing is as it seems. It’s impolite to point out the thing that everyone hasn’t agreed not to talk about, but let's do it anyway, I’m going to strip away the metaphor and give it to you like Roshi gives it to his student and we’ll see if that takes or just confuses you further.  This is all about symbols. It’s about the effects of symbol systems on cognition and how the evolution of symbols and concepts in the mind of a child affects how they are able to perceive and engage with themselves and the world.
d4c86ac4-85ac-431f-a7d1-e4e817f84292
trentmkelly/LessWrong-43k
LessWrong
Unifying the Simulacra Definitions Epistemic Status: Confident this is the perspective I find most useful. This is intended to both be a stand-alone post and to be the second post in the Simulacra sequence, with the first being Simulacra Levels and their Interactions. It should be readable on its own, but easier having read the previous post. Simulacra levels are difficult to understand. This is not without cause. This is complex and bizarre stuff. Simulacra levels are a map of the metaphors we use to create metaphoric maps of both territory and the map itself. The text that coined the term Simulacra levels does not help matters. The term was first referenced locally by Ben Hoffman in this post, but this was not the original source. The original source of the term is a super-dense work of French philosophy. It requires the reader to pause after every sentence. It’s not clear that a proper review would be shorter than the book itself.  Thus, I’m still working through the book. The more I read Jean Baudrillard‘s further assertions, the less they seem deserving of engagement. He is opposed for nonsensical reasons not only to the concept of capitalism, but the concepts of money, value and trade, and even urbanization and mass production. He blames these for the rise of simulacra, whereas they are the primary forces opposed to simulacra.  Upon parsing many of his super-dense sentences, I find many of them to be outright false. I find many others to be based on models and frameworks very different from my own, and that are assumed rather than specified in the text. The idea that capitalism isn’t the cause of all the world’s problems (never mind whether it’s the solution) does not seem to parse in his mind. I find many others to be downright absurd, or to be carrying water for the agendas of History’s Greatest Villains.  This is a case where I strongly endorse taking the concepts that are useful and leaving the remaining giant mess behind.  Baudrillard’s definition will be kept. Beyond that, I’m
d6f3d037-4918-4826-b4f8-cb9be9278fd3
trentmkelly/LessWrong-43k
LessWrong
The Controls are Lying: A Note on the Memetic Hazards of Video Games [Link] Chris Pruett writes on the Robot Invader blog: > Good player handling code is often smoke and mirrors; the player presses buttons and sees a reasonable result, but in between those two operations a whole lot of code is working to ensure that the result is the best of many potential results.  For example, my friend Greggman discovered that Mario 3's jumping rules change depending on whether or not a level has slopes in it. Halo's targeting reticle famously slows as it passes over an enemy to make it easier to target with an analog stick without using an auto-aim system. When Spider-Man swings, he certainly does not orient about the spot where his web connects to a building (at least, he didn't in the swinging system I wrote). > > Good player handling code doesn't just translate the player's inputs into action, it tries to discern the player's intent. Once the intended action has been identified, if the rules of the game allow it, good player handling code makes the action happen–even if it means breaking the rules of the simulation a little. The goal of good handling code isn't to maintain a "correct" simulation, it's to provide a fun game. It sucks to miss a jump by three centimeters. It sucks to take the full force of a hit from a blow that visually missed. It sucks to swing into a brick wall at 80 miles per hour instead of continuing down the street. To the extent that the code can understand the player's intent, it should act on that intent rather than on the raw input. Do what I mean, not what I say. I suppose this explains why I am better at arcade bowling games than I am at actual bowling. More seriously, while I had some vague awareness of this, I am slightly surprised at the breadth (Mario 3!?) and depth to which this "control re-interpretation" takes place.
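A toy version of the reticle trick described above (invented numbers and function names, not Halo's actual code): the handling code quietly scales stick sensitivity down while the reticle overlaps a target, so the raw input and the on-screen result diverge in the player's favour.

```python
def effective_sensitivity(base: float, reticle_over_target: bool) -> float:
    # Slow the reticle near targets so analog-stick aiming lands more often.
    return base * (0.4 if reticle_over_target else 1.0)

print(effective_sensitivity(10.0, reticle_over_target=False))  # 10.0 in open space
print(effective_sensitivity(10.0, reticle_over_target=True))   # 4.0 while over an enemy
```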
d61d4048-cb91-42c4-b2fb-080d9438e9ae
trentmkelly/LessWrong-43k
LessWrong
Reinforcement Learning Study Group Hey everyone,  my name is Kay. I'm new to the forum and I came here with a specific goal in mind:  > I'm putting together a crew for a reinforcement learning study group.    Main objectives:  * Mathematical Foundations: We will work through key passages in Sutton & Barto's Book to get a good foundation of RL * Research Papers:  We will follow the  "Spinning up in Deep Reinforcement Learning" by OpenAI resource to select key research papers in RL to read. * Code: The above resource can also be used to explore main RL algorithms and ideas which we'll aim to program from scratch in Pytorch/Tensorflow * Get Practical Skills: In the long term, the goal of this group is to prepare ourselves for work in AI Alignment and adjacent fields as engineers, researchers etc. (Work at DeepMind, OpenAI, MIRI, FHI, etc.)   Important Self-Selection: It is important for me to be surrounded by ambitious and self-motivated people who are reliable, friendly, and helpful. I expect members of the study group to invest a good amount of time to study and even more to try to teach one another. Members who only plan on participating passively will be asked to become more active or leave the group.  > Think of the study group as a sports team. The goal is to put the best people together on the field. Those who aren't ready to play will have to sit on the sideline. Other helpful prerequisites are being able to program in Python 3, some knowledge of Probability and Statistics, as well as Linear Algebra and Calculus. In case this seems a little intimidating, don't worry, err on the side of joining the group. You'll pretty soon discover whether you're a good fit or not. Remember, we're all beginners who are united by the goal of becoming better.   Becoming our Coach: In the above spirit, I'm also looking for a more advanced coach, someone who knows the terrain of reinforcement learning relatively well and who's willing to guide the teams' efforts towards a more fruitful, albeit chal
293389b2-15de-4d8e-bb55-dc564293b158
trentmkelly/LessWrong-43k
LessWrong
How long does it take to become Gaussian? The central limit theorems all say that if you convolve stuff enough, and that stuff is sufficiently nice, the result will be a Gaussian distribution. How much is enough, and how nice is sufficient?  Identically-distributed distributions converge quickly For many distributions d, the repeated convolution d∗d∗⋯∗d looks Gaussian. The number of convolutions you need to look Gaussian depends on the shape of d. This is the easiest variant of the central limit theorem: identically-distributed distributions.  The uniform distribution converges real quick: The result of uniform(1, 2) * uniform(1, 2) * ... * uniform(1, 2), with 30 distributions total. This plot is an animated version of the plots in the previous post. The black curve is the Gaussian distribution with the same mean and variance as the red distribution. The more similar red is to black, the more Gaussian the result of the convolutions is. The numbers on the x axis are increasing because the mean of f∗g is the sum of the means of f and g, so if we start with positive means, repeated convolutions shoot off into higher numbers. Similar for the variance - notice how the width starts as the difference between 1 and 2, but ends with differences in the tens. You can keep the location stationary under convolution by starting with a distribution centered at 0, but you can't keep the variance from increasing, because you can't have a variance of 0 (except in the limiting case). Here's a more skewed distribution: beta(50, 1). beta(50, 1) is the probability distribution that represents knowing that a lake has bass and carp, but not how many of each, and then catching 49 bass in a row. It's fairly skewed! This time, after 30 convolutions, we're not quite Gaussian - the skew is still hanging around. But for a lot of real applications, I'd call the result "Gaussian enough". beta(50, 1) convolved with itself 30 times.   A similar skew in the opposite direction, from the exponential distribution: exp(20) I was surprise
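For readers who want to reproduce the uniform(1, 2) example numerically, here is a rough sketch (my own grid and discretisation choices, not the post's code): convolve the density with itself 30 times and compare it against the Gaussian with matching mean and variance.

```python
# Discretise the uniform(1, 2) density, convolve repeatedly, and compare to the
# Gaussian with the same mean (30 * 1.5) and variance (30 * 1/12).
import numpy as np

dx = 0.001
x = np.arange(1.0, 2.0 + dx, dx)
f = np.ones_like(x)                 # uniform(1, 2) density

density = f
n_convs = 30
for _ in range(n_convs - 1):
    density = np.convolve(density, f) * dx   # convolution of two densities

support = np.arange(len(density)) * dx + n_convs * 1.0   # sum of 30 uniforms lives on [30, 60]
mean = n_convs * 1.5
var = n_convs * (1.0 / 12.0)
gaussian = np.exp(-(support - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Small if the "Gaussian enough after 30 convolutions" claim holds.
print("max absolute difference between densities:", np.abs(density - gaussian).max())
```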
91296907-0bab-49fa-8eb3-00dc74a64bfc
trentmkelly/LessWrong-43k
LessWrong
On oxytocin-sensitive neurons in auditory cortex (For the big picture of how I wound up on this topic, see Symbol Grounding and Human Social Instincts. But I wound up feeling like oxytocin-sensitive neurons in auditory cortex are NOT an important piece of that particular puzzle.) (I tried to minimize neuroscience jargon (as usual), but I don’t really expect that non-neuroscientists would want to read this post anyway.) I just read the paper “Oxytocin enables maternal behaviour by balancing cortical inhibition” (free PDF link) by Bianca J. Marlin, Mariela Mitre, James A. D’amour, Moses V. Chao, & Robert C. Froemke at NYU (Nature, 2015). And then I spent a while feeling confused. But I think I’m no longer confused. In this post I’ll explain both why I was confused, and how I got over it. Remarkable artwork! I just love it. Bravo. Background: pup-retrieval behavior The paper concerns a behavior which is amusingly described in the following book excerpt (emphasis added): > After a rat gives birth, she displays a complex repertoire of maternal behaviors. If given paper, she will shred it and use the strips to build a nest. Virgin and early pregnant rats avoid newborn pups, but a mother rat will gather her young into the nest and allow them to suckle, and if any pup wanders away she will promptly retrieve it. Indeed, she will retrieve any pup that she sees close to her nest, whether hers or not, seemingly without limit. If 20 or 30 strange pups are placed in her cage, all will be retrieved, and she will strive to ensure that all are groomed and fed. … These behaviors are expressed after normal vaginal delivery, but are disrupted by interventions that impair oxytocin release … Strikingly, maternal behavior can be induced by injecting small amounts of oxytocin into the brain, both in virgin rats when at the stage of the cycle when estrogen levels are high, and in ovariectomized rats that have been infused with estrogen. —Gareth Leng, The Heart Of the Brain: The Hypothalamus and its Hormones, p191 The Marlin et al.
2ae9173a-c797-4e29-96dd-4f93c41b5675
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] The Conscious Sorites Paradox Today's post, The Conscious Sorites Paradox was originally published on 28 April 2008. A summary (taken from the LW wiki):   > Decoherence is implicit in quantum physics, not an extra law on top of it. Asking exactly when "one world" splits into "two worlds" may be like asking when, if you keep removing grains of sand from a pile, it stops being a "heap". Even if you're inside the world, there may not be a definite answer. This puzzle does not arise only in quantum physics; the Ebborians could face it in a classical universe, or we could build sentient flat computers that split down their thickness. Is this really a physicist's problem? Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was On Being Decoherent, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
5db727b2-0f33-4744-8ffa-6df540530d6b
trentmkelly/LessWrong-43k
LessWrong
Anti-Pascaline satisficer It occurred to me that the anti-Pascaline agent design could be used as part of a satisficer approach. The obvious thing to reduce dangerous optimisation pressure is to make a bounded utility function, with an easily achievable bound. Such as giving them a utility linear in paperclips that maxs out at 10. The problem with this is that, if the entity is a maximiser (which it might become), it can never be sure that it's achieved its goals. Even after building 10 paperclips, and an extra 2 to be sure, and an extra 20 to be really sure, and an extra 3^^^3 to be really really sure, and extra cameras to count them, with redundant robots patrolling the cameras to make sure that they're all behaving well, etc... There's still an ε chance that it might have just dreamed this, say, or that its memory is faulty. So it has a current utility of (1-ε)10, and can increase this by reducing ε - hence building even more paperclips. Hum... ε, you say? This seems a place where the anti-Pascaline design could help. Here we would use it at the lower bound of utility. It currently has probability ε of having utility < 10 (ie it has not built 10 paperclips) and (1-ε) of having utility = 10. Therefore and anti-Pascaline agent with ε lower bound would round this off to 10, discounting the unlikely event that it has been deluded, and thus it has no need to build more paperclips or paperclip counting devices. Note that this is an un-optimising approach, not an anti-optimising one, so the agent may still build more paperclips anyway - it just has no pressure to do so.
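A toy numerical sketch of the idea (utilities, probabilities, and the bound are all invented for illustration): a plain maximiser always gains a little expected utility by pushing ε lower, while the anti-Pascaline agent with lower bound ε treats the bounded goal as already met.

```python
def maximiser_eu(epsilon: float) -> float:
    # Utility 10 if the 10 paperclips are real, 0 if the agent is deluded.
    return (1 - epsilon) * 10

def anti_pascaline_eu(epsilon: float, bound: float = 1e-2) -> float:
    # Outcomes whose total probability is below the bound are rounded away.
    return 10.0 if epsilon < bound else (1 - epsilon) * 10

for eps in (1e-3, 1e-6, 1e-9):
    print(eps, maximiser_eu(eps), anti_pascaline_eu(eps))
# The maximiser column keeps creeping toward 10, so more cameras and paperclips always help;
# the anti-Pascaline column is already exactly 10, so that optimisation pressure disappears.
```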
71a73cf0-8d0c-45dd-bef6-b72e6170da5e
trentmkelly/LessWrong-43k
LessWrong
Of arguments and wagers Automatically crossposted from ai-alignment.com   (In which I explore an unusual way of combining the two.) Suppose that Alice and Bob disagree, and both care about Judy’s opinion. Perhaps Alice wants to convince Judy that raising the minimum wage is a cost-effective way to fight poverty, and Bob wants to convince Judy that it isn’t. If Judy has the same background knowledge as Alice and Bob, and is willing to spend as much time thinking about the issue as they have, then she can hear all of their arguments and decide for herself whom she believes. But in many cases Judy will have much less time than Alice or Bob, and is missing a lot of relevant background knowledge. Often Judy can’t even understand the key considerations in the argument; how can she hope to arbitrate it? Wagers For a warm-up, imagine that Judy could evaluate the arguments if she spent a long enough thinking about them. To save time, she could make Alice and Bob wager on the result. If both of them believe they’ll win the argument, then they should be happy to agree to the deal: “If I win the argument I get $100; if I lose I pay $100.” (Note: by the end of the post, no dollars will need to be involved.) If either side isn’t willing to take the bet, then Judy could declare the case settled without wasting her time. If they are both willing to bet, then Judy can hear them out and decide who she agrees with. That person “wins” the argument, and the bet: Alice and Bob are betting about what Judy will believe, not about the facts on the ground. Of course we don’t have to stick with 1:1 bets. Judy wants to know the probability that she will be convinced, and so wants to know at what odds the two parties are both willing to bet. Based on that probability, she can decide if she wants to hear the arguments. It may be that both parties are happy to take 2:1 bets, i.e. each believes they have a 2/3 chance of being right. What should Judy believe? (In fact this should always happen at small stakes:
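The odds-to-probability step can be spelled out with a small sketch (illustrative stakes, not from the post): anyone happy to risk R to win W on "Judy will side with me" is implying a probability of at least R / (R + W).

```python
def implied_probability(risk: float, win: float) -> float:
    # Indifference point: p * win == (1 - p) * risk  =>  p == risk / (risk + win)
    return risk / (risk + win)

print(implied_probability(100, 100))  # 1:1 stakes -> 0.5
print(implied_probability(200, 100))  # 2:1 stakes -> 0.666..., the 2/3 from the post
```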
2d8219e5-0c82-4042-907b-c0dfb14d99a8
trentmkelly/LessWrong-43k
LessWrong
Ranked Choice Voting is Arbitrarily Bad Cross posting from https://applieddivinitystudies.com/2020/09/02/ranked-bad/ Recently, there's been headway in adopting Ranked-Choice Voting, used by several states in the 2020 US Democratic presidential primaries and to be adopted by New York City in 2021. For all its virtues, Ranked Choice Voting contains a number of risks, largely due to tactical voting and democratic illegitimacy. First, a quick primer on existing systems. The one we're used to is called Plurality Voting, and is by far the simplest: Each voter casts a vote for one candidate, and the candidate with the most votes wins. Though clear and intuitive, there are several problems best illustrated by example: Tactical Voting for "Realistic Candidates" Say public polls report: * 45% of voters prefer Alice * 45% of voters prefer Bob * 10% of voters prefer Carol No matter how strongly voters support Carol, on election day, they would rather vote for Alice or Bob than "waste" a vote on a candidate who won't win. It's worth asking why Carol was polling so low in the first place, but a common explanation is perpetuation through a party system. If Alice and Bob's parties have won historically, the electorate may be locked into a perpetual two-party system, no matter how compelling a particular third-party candidate happens to be. Loss of Popular Moderate Candidates In another race, voters have real preferences such that: * 50% of voters prefer Alice > Carol > Dave > Bob * 50% of voters prefer Bob > Carol > Dave > Alice Alice and Bob are both despised by half the population, yet one of them is guaranteed to win. Meanwhile, Carol has universal appeal, but would receive 0 votes in a plurality election, no matter how polarizing the other candidates are. In a more extreme case, we might have: * 25% of voters prefer Alice > Bob > Carol > Dave * 25% of voters prefer Bob > Alice > Carol > Dave * 30% of voters prefer Dave > Alice > Carol > Bob * 20% of voters prefer Carol > Alice > Bob > Dave Ali
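As a quick numerical check of the last profile above (weights are the stated percentages): plurality elects Dave even though Alice wins every head-to-head contest.

```python
# Plurality totals versus pairwise (head-to-head) totals for the four-group profile.
from itertools import combinations

profile = [  # (weight, ranking best-to-worst)
    (25, ["Alice", "Bob", "Carol", "Dave"]),
    (25, ["Bob", "Alice", "Carol", "Dave"]),
    (30, ["Dave", "Alice", "Carol", "Bob"]),
    (20, ["Carol", "Alice", "Bob", "Dave"]),
]
candidates = ["Alice", "Bob", "Carol", "Dave"]

first_choices = {c: sum(w for w, r in profile if r[0] == c) for c in candidates}
print("plurality totals:", first_choices)   # Dave tops the count with 30

for a, b in combinations(candidates, 2):
    a_over_b = sum(w for w, r in profile if r.index(a) < r.index(b))
    print(f"{a} vs {b}: {a_over_b} - {100 - a_over_b}")
# Alice wins all three of her matchups (75-25, 80-20, 70-30), so she beats everyone pairwise,
# yet plurality never notices her.
```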
97c85f9e-9d83-4412-bb69-813f607b8804
trentmkelly/LessWrong-43k
LessWrong
A voting theory primer for rationalists What is voting theory? Voting theory, also called social choice theory, is the study of the design and evaulation of democratic voting methods (that's the activists' word; game theorists call them "voting mechanisms", engineers call them "electoral algorithms", and political scientists say "electoral formulas"). In other words, for a given list of candidates and voters, a voting method specifies a set of valid ways to fill out a ballot, and, given a valid ballot from each voter, produces an outcome. (An "electoral system" includes a voting method, but also other implementation details, such as how the candidates and voters are validated, how often elections happen and for what offices, etc. "Voting system" is an ambiguous term that can refer to a full electoral system, just to the voting method, or even to the machinery for counting votes.) Most voting theory limits itself to studying "democratic" voting methods. That typically has both empirical and normative implications. Empirically, "democratic" means: * There are many voters * There can be more than two candidates In order to be considered "democratic", voting methods generally should meet various normative criteria as well. There are many possible such criteria, and on many of them theorists do not agree; but in general they do agree on this minimal set: * Anonymity; permuting the ballots does not change the probability of any election outcome. * Neutrality; permuting the candidates on all ballots does not change the probability of any election outcome. * Unanimity: If voters universally vote a preference for a given outcome over all others, that outcome is selected. (This is a weak criterion, and is implied by many other stronger ones; but those stronger ones are often disputed, while this one rarely is.) * Methods typically do not directly involve money changing hands or other enduring state-changes for individual voters. (There can be exceptions to this, but there are good reasons to want to unde
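As a concrete anchor for the three minimal criteria, here is a small sketch (plurality with alphabetical tie-breaking on one invented profile, not anything from the post) that checks anonymity, neutrality, and unanimity on that profile.

```python
import random
from collections import Counter

def plurality(ballots):
    counts = Counter(ballots)
    top = max(counts.values())
    return min(c for c, n in counts.items() if n == top)   # break ties alphabetically

ballots = ["A", "B", "A", "C", "A"]

# Anonymity: permuting the ballots changes nothing.
shuffled = ballots[:]
random.Random(0).shuffle(shuffled)
assert plurality(shuffled) == plurality(ballots)

# Neutrality: renaming candidates on every ballot renames the winner accordingly.
rename = {"A": "X", "B": "Y", "C": "Z"}
assert plurality([rename[b] for b in ballots]) == rename[plurality(ballots)]

# Unanimity: if everyone votes for the same candidate, that candidate wins.
assert plurality(["B"] * 5) == "B"
print("plurality passes all three checks on this profile")
```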
d0f67043-42ba-4858-80e0-65c382c42d29
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Diego experimental meetup Discussion article for the meetup : San Diego experimental meetup WHEN: 15 January 2012 01:00:00PM (-0800) WHERE: 6380 Del Cerro Blvd. San Diego, CA 92120 We're having a meetup in our usual haunt on Sunday, January 15th at 1pm. Food and drink are available for purchase, though you'll need your ID to get anything alcoholic. In the spirit of developing Rationality Dojo curricula, we're going to test a newly developed training session. We want to see how it works and to get your feedback on how it came across to you. And I think we'll have good fun in the process. :-) If there's time and interest, I'm also willing to continue the discussion from last time by giving a presentation on what I call the Enneagram keys. These are guidelines for interacting with the types that let you (a) build a good relationship with someone of a known type, (b) hit their hot buttons like nothing else (which is really meant to help you know why they get hurt or angry and how to avoid doing that), and (c) open communications and build rapport. Because each Ennea-type has a relatively specific way of reacting to specific keys, you can also use the keys to test hypotheses about someone's type. For instance, if you don't know whether you're dealing with a Four or a Nine, you can try suggesting something optimistic about their future ("Today might be a bit drab, but tomorrow is a new day!"). A Nine will typically respond with something positive (a spacey smile if they're withdrawn or a big grin and nods if they're feeling confident), whereas a Four will typically turn cold and might even give you an eyeroll. So the keys are where the Enneagram most blatantly pays rent. But I'm definitely open to other options! I could stand to train my calibration better, and I have as yet to play any of the calibration games others have brought on occasion. (I'm looking at you, Jennifer!) So! Please, show up, help us develop a solid Rationality Dojo curriculum by joining us in getting stronger, and join
5edb1391-fe5a-4428-87a3-b50076692114
trentmkelly/LessWrong-43k
LessWrong
How I Think, Part Three: Weighing Cryonics These are things I've been thinking about when trying to decide if I should get cryonics. I've sometimes gotten the sense that some rationalists think this is a no-brainer. But since it's a $100k+ decision, I think it deserves quite a bit of thinking. So here is my list of thoughts as I think them: 1: Even if there's not a huge chance it will work, it's probably still worth it. Let's say there's a 10% chance I think it will work. Wouldn't I pay ten times the price to get it as a guarantee? 2: What would my quality of life be after waking up if it did work? There's a chance it would be very, very high. 3: What would I spend the money on if I didn't spend it on this? I'm not sure, really. 4: There's a chance that scientific progress will achieve immortality before I die anyway. 5: There's a chance I'll die tomorrow 6: There's a chance that I'll die the normal way, and then scientific progress will eventually be able to wake me up anyway 7: There's a chance that there actually is an afterlife and it's great, and I don't have to worry about dying anyway 8: There's a chance that there's an afterlife and it is absolutely terrible, in which case I should try even harder to not die 9: Would cryonics prevent/delay an afterlife anyway? 10: How long would I want to live anyway if I was given the choice? It sounds like some people accept the fact that they will die when they get old, maybe they even look forward to it. Maybe that's just because they are in pain? Or because of other problems with life that are solvable? Or just because they've been conditioned to do that somehow? 11: There's a chance that the world will end before I die, so cryonics wouldn't help. 12: I only started thinking about cryonics a couple of years ago. Isn't there a big chance I'll think it doesn't make any sense ten or twenty years from now? 13: Can I get a refund if I change my mind about cryonics before dying? If I use a life insurance plan, can I change the beneficiary? If so, it proba
a9aba7ff-3d8b-4e9e-b741-34823b0d2387
trentmkelly/LessWrong-43k
LessWrong
Meetup : Buffalo Meetup Discussion article for the meetup : Buffalo Meetup WHEN: 17 February 2013 04:00:00PM (-0500) WHERE: SPOT Coffee Delaware Ave & W Chippewa St, Buffalo, NY (Apologies, for the short notice.) Last meetup we talked about making sure your beliefs "pay rent " by constraining anticipation. This time we'll talk about specific examples of ways some beliefs may not even really count as beliefs at all: Belief in Belief - http://lesswrong.com/lw/i4/belief_in_belief/ Professing and Cheering - http://lesswrong.com/lw/i6/professing_and_cheering/ Belief as Attire - http://lesswrong.com/lw/i7/belief_as_attire/ The concepts are pretty similar so I thought I'd just lump them together for this meetup. Read what you can (3 posts is a lot, so don't worry about it if you don't get around to it). In any case, I'll do a cliff-notes summary for everyone. Anyone can attend. Feel free to invite friends who might be interested. We'll also play some cool games too. We're meeting at SPOT this time (I'll have a sign so you can find us easily). Discussion article for the meetup : Buffalo Meetup
97f71bd2-b9dd-48d1-a2a9-a5833e4c9489
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
[Linkpost] Given Extinction Worries, Why Don’t AI Researchers Quit? Well, Several Reasons I've written a blog post for a lay audience, explaining some of the reasons that AI researchers who are concerned about extinction risk have for continuing to work on AI research, despite their worries. The apparent contradiction is causing a lot of confusion among people who haven't followed the relevant discourse closely. In many instances, lack of clarity seems to be leading people to resort to borderline conspiratorial thinking (e.g., about the motives of signatories of the recent [statement](https://www.safe.ai/statement-on-ai-risk)), or to otherwise dismiss the worries as not totally serious. I hope that this piece can help make common knowledge some things that aren’t widely known outside of tech and science circles. As an overview, the reasons I focus on are:

1. Their specific research isn’t actually risky
2. Belief that AGI is inevitable and more likely to go better if you personally are involved
3. Thinking AGI is far enough away that it makes sense to keep working on AI for now
4. Commitment to science for science's sake
5. Belief that the benefits of AGI would outweigh even the risk of extinction
6. Belief that advancing AI on net reduces global catastrophic risks, via reducing other risks
7. Belief that AGI is worth it, even if it causes human extinction

I'll also note that the piece isn't meant to defend the decision of researchers who continue to work on AI despite thinking it presents extinction risks, nor to criticize them for their decision, but instead to add clarity. If you're interested in reading more, you can follow the link [here](https://medium.com/@daniel_eth/given-extinction-worries-why-dont-ai-researchers-quit-well-several-reasons-a0b6027da4e7). And of course feel free to send the link to anyone who's confused by the current situation.

6dce6b88-2e01-4d8b-a38d-50f8ff633476
trentmkelly/LessWrong-43k
LessWrong
AI Safety field-building projects I'd like to see People sometimes ask me what types of AIS field-building projects I would like to see. Here’s a list of 11 projects. Background points/caveats But first, a few background points. 1. These projects require people with specific skills/abilities/context in order for them to go well. Some of them also have downside risks. This is not a “list of projects Akash thinks anyone can do” but rather a “list of projects that Akash thinks could Actually Reduce P(Doom) if they were executed extremely well by an unusually well-qualified person/team.” 2. I strongly encourage people to reach out to experienced researchers/community-builders before doing big versions of any of these. (You may disagree with their judgment, but I think it’s important to at least have models of what they believe before you do something big.) 3. This list represents my opinions. As always, you should evaluate these ideas for yourself. 4. If you are interested in any of these, feel free to reach out to me. If I can’t help you, I might know someone else who can.  5. Reminder that you can apply for funding from the long-term future fund. You don’t have to apply to execute a specific project. You can apply for career exploration grants, grants that let you think about what you want to do next, and grants that allow you to test out different hypotheses/uncertainties. 6. I sometimes use the word “organization”, which might make it seem like I’m talking about 10+ people doing something over the course of several years. But I actually mean “I think a team of 1-3 people could probably test this out in a few weeks and get something ambitious started here within a few months if they had relevant skills/experiences/mentorship. 7. These projects are based on several assumptions about AI safety, and I won’t be able to articulate all of them in one post. Some assumptions include “AIS is an extremely important cause area” and “one of the best ways to make progress on AI safety is to get talented people working
212be483-ff96-4408-a3ee-c235dd9072da
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA Meetup - The Resistance (Game) Discussion article for the meetup : West LA Meetup - The Resistance (Game) WHEN: 11 April 2012 07:00:00PM (-0700) WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064 When: 7:00pm - 9:00pm Wednesday, April 11th. Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. Parking is free for 3 hours. Activity: We will play a classic game of deception and deduction, The Resistance. There will be unmoderated discussion before game play, and afterwards we can talk about the use of games and other methods for training our heuristics. Don't worry if you don't have time to read anything, or even if you've never read any Less Wrong! Bring a friend! The atmosphere is casual, and good, intelligent conversation with friendly people is guaranteed. I will bring a whiteboard with Bayes' Theorem written on it. Discussion article for the meetup : West LA Meetup - The Resistance (Game)
aa1d9717-5e71-43ce-8763-9c328e2ff29e
trentmkelly/LessWrong-43k
LessWrong
Alignment is hard. Communicating that, might be harder Crossposted from the EA Forum: https://forum.effectivealtruism.org/posts/PWKWEFJMpHzFC6Qvu/alignment-is-hard-communicating-that-might-be-harder    Note: this is my attempt to articulate why I think it's so difficult to discuss issues concerning AI safety with non-EAs/Rationalists, based on my experience. Thanks to McKenna Fitzgerald for our recent conversation about this, among other topics.    The current 80,000 Hours list of the world's most pressing problems ranks AI safety as the number one cause in the highest priority area section. And yet, it's a topic never discussed in the news. Of course, that's not because journalists and reporters mind talking about catastrophic scenarios. Most professionals in the field, are perfectly comfortable talking about climate change, wars, pandemics, wildfires, etc. So what is it about AI safety that doesn't make it a legitimate topic for a panel on TV?  The EA consensus is roughly that being blunt about AI risks in the broader public would cause social havoc. And this is understandable; the average person seems to interpret the threats from AI either as able to provoke socio-economic shifts similar to those that occurred because of novel technologies during the Industrial Revolution (mostly concerning losing jobs), or as violently disastrous as in science fiction films (where e.g., robots take over by fighting wars and setting cities on fire).  If that's the case, taking seriously what Holden Karnofsky describes in The Most Important Century as well as what many AI timelines suggest (i.e., that humanity might be standing at a very crucial point in its trajectory) could easily be interpreted in ways that would lead to social collapse just by talking about what the problem concerns. Modern AI Luddites would potentially form movements to "prevent the robots from stealing their jobs". Others would be anxiously preparing to physically fight and so on.  But, even if the point about AGI doesn't get misinterpreted, if, in other
39573a2e-866f-41a5-97c5-62a5839dab60
trentmkelly/LessWrong-43k
LessWrong
SAE feature geometry is outside the superposition hypothesis Written at Apollo Research Summary: Superposition-based interpretations of neural network activation spaces are incomplete. The specific locations of feature vectors contain crucial structural information beyond superposition, as seen in circular arrangements of day-of-the-week features and in the rich structures of feature UMAPs. We don’t currently have good concepts for talking about this structure in feature geometry, but it is likely very important for model computation. An eventual understanding of feature geometry might look like a hodgepodge of case-specific explanations, or supplementing superposition with additional concepts, or plausibly an entirely new theory that supersedes superposition. To develop this understanding, it may be valuable to study toy models in depth and do theoretical or conceptual work in addition to studying frontier models.  Epistemic status: Decently confident that the ideas here are directionally correct. I’ve been thinking these thoughts for a while, and recently got round to writing them up at a high level. Lots of people (including both SAE stans and SAE skeptics) have thought very similar things before and some of them have written about it in various places too. Some of my views, especially the merit of certain research approaches to tackle the problems I highlight, have been presented here without my best attempt to argue for them. What would it mean if we could fully understand an activation space through the lens of superposition? If you fully understand something, you can explain everything about it that matters to someone else in terms of concepts you (and hopefully they) understand. So we can think about how well I understand an activation space by how well I can communicate to you what the activation space is doing, and we can test if my explanation is good by seeing if you can construct a functionally equivalent activation space (which need not be completely identical of course) solely from the information I have gi
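One way to make "structure beyond superposition" operational, sketched here with random stand-in vectors rather than real SAE decoder directions: ask how much of a small feature family's variance (say, seven day-of-the-week features) fits in a single 2D plane, since a circular arrangement would put almost all of it there while generic near-orthogonal directions would not.

```python
# Fit the top principal plane of a 7-vector feature family and report how much
# variance it captures. The feature matrix below is random stand-in data.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features = 512, 7
features = rng.normal(size=(n_features, d_model))
features /= np.linalg.norm(features, axis=1, keepdims=True)

centered = features - features.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / (singular_values**2).sum()

# For random near-orthogonal directions the variance spreads over many components;
# for a circular day-of-the-week arrangement almost all of it sits in the first two.
print("variance explained by top-2 components:", explained[:2].sum())
```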
1319e381-4e0b-4eec-96de-789a44a7ea8a
trentmkelly/LessWrong-43k
LessWrong
Implementing CDT with optimal predictor systems We consider transparent games between bounded CDT agents ("transparent" meaning each player has a model of the other players). The agents compute the expected utility of each possible action by executing an optimal predictor of a causal counterfactual, i.e. an optimal predictor for a function that evaluates the other players and computes the utility for the selected action. Since the agents simultaneously attempt to predict each other, the optimal predictors form an optimal predictor system for the reflective system comprised by the causal counterfactuals of all agents. We show that for strict maximizers, the resulting outcome is a bounded analogue of an approximate Nash equilibrium, i.e. a strategy which is an optimal response within certain resource constraints up to an asymptotically small error. For "thermalizers" (agents that choose an action with probability proportional to 2uT), we get a similar result with expected utility Es[u] replaced by "free utility" Es[u]+TH(s). Thus, such optimal predictor systems behave like bounded counterparts of reflective oracles. Preliminaries The proofs for this section are given in Appendix A. We redefine E2(ll,ϕ) and E2(ll) to be somewhat smaller proto-error spaces which nevertheless yield the same existence theorems as before. This is thanks to Lemma A.1. Construction 1 Given ϕ∈Φ, denote E2(ll,ϕ) the set of bounded functions δ:N2→R≥0 s.t. ∀ψ∈Φ:ψ≤ϕ⟹Eλkψ[δ(k,j)]=O(ψ(k)−1) Denote E2(ll):=⋂ϕ∈ΦE2(ll,ϕ) Proposition 1 If ϕ∈Φ is s.t. ∃n:liminfk→∞2−knϕ(k)=0, O(ϕ−1) is a proto-error space. For any ϕ∈Φ, E2(ll,ϕ) is an ample proto-error space. When ϕ is non-decreasing, E2(ll,ϕ) is stable. E2(ll) is a stable ample proto-error space. Notation We denote O(ϕ−1∞):=O(ϕ−1)1∞. We allow the same abuse of notation for this symbol as for usual big O notation. For any (poly,rlog)-bischeme ^Q=(Q,rQ,σQ) we use ^σkjQ to denote UrQ(k,j)×σkjQ. For reflective systems R=(Σ,f,μ) we write indices in Σ as superscripts rather than subscripts a
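Spelled out, the thermalizer definition above reads as follows on my best reading of the notation (an assumption about the intended formulas, not a quote from the original):

```latex
% Hedged reconstruction of the flattened inline notation (my reading, not the source text):
\[
  \Pr[a] \;\propto\; 2^{\,u(a)/T}
  \qquad \text{(a thermalizer samples actions Boltzmann-style at temperature } T\text{)}
\]
\[
  F(s) \;=\; \mathbb{E}_{s}[u] \;+\; T\,H(s)
  \qquad \text{(``free utility'', with } H \text{ the entropy of the mixed strategy } s\text{)}
\]
```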
0fcc774d-677a-45b1-ab6f-c39008cc106b
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Wittgenstein and ML — parameters vs architecture

Status: a brief distillation of Wittgenstein's book *On Certainty*, using examples from deep learning and GOFAI, plus discussion of AI alignment and interpretability.

---

> "That is to say, the questions that we raise and our doubts depend on the fact that some propositions are exempt from doubt, are as it were like hinges on which those turn."
>
> — Ludwig Wittgenstein, On Certainty

1. Deep Learning
================

Suppose we want a neural network to detect whether two children are siblings based on photographs of their faces. The network will receive two n
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: 
local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} -dimensional vectors v1 and v2representing the pixels in each image, and will return a value y(v1,v2)∈R which we interpret as the log-odds that the children are siblings. So the model has type-signature Rn+n→R. There are two ways we can do this. 1. We could use an architecture yA(v1,v2)=σ(vT1Av2+b), where — * σ is the sigmoid function * A is an n×n matrix of learned parameters, * b∈R is a learned bias. * This model has n2+1 free parameters. 2. Alternatively, we could use an architecture yU(v1,v2)=σ(vT1(U+UT2)v2+b), where — * σ is the sigmoid function * U is an n×n upper-triangular matrix of learned parameters * b∈R is a learned bias * This model has n2/2+n/2+1 free parameters. Each model has a vector of free parameters θ∈Θ. If we train the model via SGD on a dataset (or via some other method) we will end up with a trained models yθ:Rn+n→R, where y\_:Θ→(Rn+n→R) is the architecture. Anyway, we now have two different NN models, and we want to ascribe beliefs to each of them. Consider the proposition ϕ that siblingness is symmetric, i.e. every person is the sibling of their siblings. What does it mean to say that a model *knows* or *belives* that ϕ. Let's start with a black-box definition of *knowledge* or *belief:* when we say that a model *knows* or *believes* that ϕ, we mean that yθ(v1,v2)=yθ(v2,v1) for all v1,v2∈Rn which look sufficiently like faces. According to this black-box definition, both trained models believe ϕ.  But if we peer inside the black box, we can see that NN Model 1 believes ϕ in a very different way than how NN Model 2 believes ϕ. * For NN Model 1, the belief is encoded in the learned parameters θ∈Θ. * For NN Model 2, the belief is encoded in the architecture itself y\_. These are two different kinds of belief. 2. Symbolic Logic ================= Suppose we use GOFAI/symbolic logic to determine whether two children are siblings. Our model consists of three things — 1. A language L consisting of names and binary familial relations. 2. A knowledge-base Γ consisting of L-formulae. 3. A deductive system ⊢ which takes a set of L-formulae (premises) to a larger set of L-formulae (conclusions).  There are two ways we can do this. 1. We could use a system (L,Γ,⊢) , where — * The language L has names for every character and familial relations parent,child,sibling,grandparent,grandchild,cousin * The knowledge-base Γ has axioms {sibling(Jack,Jill),sibling(x,y)→sibling(y,x)} * The deductive system ⊢ corresponds to first-order predicate logic. 2. Alternatively, we could use a system (L,Γ,⊢), where — * The language L has names for every character and familial relations parent,child,sibling,grandparent,grandchild,cousin * The knowledge-base Γ has axioms {sibling(Jack,Jill)} * The deductive system ⊢ corresponds to first-order predicate logic with an additional logical rule sibling(x,y)⊢sibling(y,x). In this situation, we have two different SL models, and we want to ascribe beliefs to each of them. 
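To make the contrast concrete, here is a minimal Python sketch of the two symbolic systems. This is my own illustration, not from the original post: the helper names are invented, and only symmetry-shaped implications are supported in this toy.

```python
# Minimal illustrative sketch (not from the post): the same belief, "siblingness
# is symmetric", encoded either in the knowledge base (System 1) or in the
# deductive system itself (System 2).

def flip_facts(facts, relations):
    """For each relation name in `relations`, add rel(y, x) for every known rel(x, y)."""
    return {(r, y, x) for (r, x, y) in facts if r in relations}

def close(facts, kb_symmetric=frozenset(), builtin_symmetric=frozenset()):
    """Tiny forward-chaining closure.
    kb_symmetric:      relations r for which Γ contains the axiom r(x,y) → r(y,x)
    builtin_symmetric: relations r for which ⊢ has the built-in rule r(x,y) ⊢ r(y,x)
    """
    known = set(facts)
    while True:
        new = flip_facts(known, kb_symmetric)        # symmetry coming from Γ
        new |= flip_facts(known, builtin_symmetric)  # symmetry coming from ⊢
        if new <= known:
            return known
        known |= new

# SL Model 1: Γ = {sibling(Jack, Jill), sibling(x,y) → sibling(y,x)},
#             plain first-order deduction.
model1 = close({("sibling", "Jack", "Jill")}, kb_symmetric={"sibling"})

# SL Model 2: Γ = {sibling(Jack, Jill)}, but the deductive system has the
#             extra inference rule sibling(x,y) ⊢ sibling(y,x).
model2 = close({("sibling", "Jack", "Jill")}, builtin_symmetric={"sibling"})

assert ("sibling", "Jill", "Jack") in model1
assert ("sibling", "Jill", "Jack") in model2   # same black-box behaviour
```

Both systems derive sibling(Jill, Jack); the only difference is whether the symmetry lives in Γ or in ⊢, which parallels the parameters-versus-architecture distinction from the previous section.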
Consider the proposition ϕ that siblingness is symmetric, i.e. every person is the sibling of their siblings. Let's start with a black-box definition of *knowledge* or *belief*: when we say that a model *knows* or *believes* that ϕ, we mean that Γ ⊢ sibling(τ1, τ2) → sibling(τ2, τ1) for every pair of closed L-terms τ1, τ2. According to this black-box definition, both models believe ϕ.

But if we peer inside the black box, we can see that SL Model 1 believes ϕ in a very different way than SL Model 2 does.

* For SL Model 1, the belief is encoded in the knowledge-base Γ.
* For SL Model 2, the belief is encoded in the deductive system ⊢ itself.

These are two different kinds of belief. Can you see how they map onto the distinction in the previous section?

3. Wittgenstein
===============

In *On Certainty,* Wittgenstein contrasts two different kinds of belief.

* Humans have **free beliefs** and **hinge beliefs**.
* A human's free beliefs are similar to how NN Model 1 and SL Model 1 believe ϕ. In other words, these are beliefs encoded in our learned parameters θ ∈ Θ, or in the knowledge-base Γ.
* In contrast, a human's hinge beliefs are similar to how NN Model 2 and SL Model 2 believe ϕ. In other words, these are beliefs encoded in the architecture itself (the map θ ↦ y_θ), or in the deductive system ⊢.
* Here are some of my free beliefs:
  + Cairo is the capital of Egypt.
  + 101 is a prime number.
  + There are eight planets in the Solar System.
  + Today is a Thursday.
* Here are some of my hinge beliefs:
  + I am currently on Earth.
  + Today is not 1943.
  + Here is my hand.
  + The external world exists.
  + My memory is at least somewhat reliable over short timespans.
* Let's use LessWrong's favourite analogy, **the map and the territory**.
  + We might say the map *knows* that Manchester is north of Portsmouth, because that's what's shown on the map. This would count as a free belief.
  + We might also say the map *knows* that England is roughly two-dimensional; that's also shown on the map. But this would count as a hinge belief, because it's not a free parameter.
* Wittgenstein calls these "hinge beliefs" because they must be **fixed**, allowing our world-model to "swing like a door" through the rest of the possibilities.
* Hinge beliefs are not like axioms. They aren't foundational, but instead pre-foundational. They are the presuppositions for our conceptual map to connect with the external world whatsoever.
* Hinge beliefs are not subject to rational evaluation or empirical testing, but they can be evaluated in other ways.
* It's somewhat defective to say "I know ϕ" or "I doubt ϕ" when ϕ is a hinge belief.

| | **Perception** | **Judgement** |
| --- | --- | --- |
| **Free belief** | This cat is furry | Today is a Thursday |
| **Hinge belief** | There are three colours | ϕ, ϕ→ψ ⊢ ψ |

4. Alignment relevance
======================

* Depending on the architecture, randomly initialised neural networks will "know" things.
* Determining which hinge beliefs are induced by a neural network architecture is (in general) non-trivial.
* Whether a belief is a hinge belief or a free belief will affect:
  + Capabilities
  + Safety
  + Interpretability
* The general trend of ML over the past ten years has been towards free beliefs rather than hinge beliefs. If there are fewer hinges, then the door can swing through a wider space, i.e. the model is more general.
* Nonetheless, even the most general architecture must induce some hinge beliefs, because otherwise the model couldn't correspond to any external territory whatsoever.
* As a rough rule-of-thumb, I expect that swapping free beliefs for hinge beliefs would make AI safer and less capable. I'm not sure whether this would be worthwhile on the **safety-capabilities trade-off**, and I'm not sure whether it would make AI more interpretable (but my guess is slightly yes).
* If mechanistic interpretability goes well, then we should be able to take a trained neural network with free beliefs, identify certain symmetries/regularities within the parameters, and then convert the model into an equivalent model where those beliefs are now hinges. In other words, we should be able to turn **knowledge stuck in the parameters** into **knowledge stuck in the architecture** (see the sketch below).
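As a concrete toy version of that last bullet, here is a minimal numpy sketch (my own, not from the post) of the two sibling-detector architectures from section 1, plus the parameters-to-architecture conversion for the symmetry belief. All names below are invented for this example.

```python
# Illustrative sketch only: the two architectures, and a toy
# "free belief -> hinge belief" conversion for the symmetry of siblingness.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n = 64
rng = np.random.default_rng(0)
b = 0.1

# NN Model 1: a general matrix A. If the trained model respects symmetry,
# that fact lives in the learned parameters (a free belief).
A = rng.normal(size=(n, n))
def y_A(v1, v2):
    return sigmoid(v1 @ A @ v2 + b)

# NN Model 2: only the symmetric part of U is ever used, so symmetry holds
# for every parameter setting (a hinge belief baked into the architecture).
U = np.triu(rng.normal(size=(n, n)))   # upper-triangular parameters
def y_U(v1, v2):
    S = (U + U.T) / 2
    return sigmoid(v1 @ S @ v2 + b)

v1, v2 = rng.normal(size=n), rng.normal(size=n)
print(abs(y_A(v1, v2) - y_A(v2, v1)))  # generally far from zero
print(abs(y_U(v1, v2) - y_U(v2, v1)))  # zero up to floating-point error

# Toy "interpretability conversion": given a trained Model 1 whose A happens
# to be symmetric, re-express it as a Model 2, moving the symmetry regularity
# out of the parameters and into the architecture.
A_trained = (A + A.T) / 2              # pretend training produced this
S = (A_trained + A_trained.T) / 2      # the symmetric part actually being used
U_equiv = 2 * np.triu(S, k=1) + np.diag(np.diag(S))
assert np.allclose((U_equiv + U_equiv.T) / 2, S)
```

The point is just that the same input-output behaviour can be realised with the symmetry either learned or guaranteed; the last step is the kind of conversion the bullet above gestures at.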
613ff47b-d96e-4b1d-871b-aff351fbf7e1
StampyAI/alignment-research-dataset/arxiv
Arxiv
AI Safety and Reproducibility: Establishing Robust Foundations for the Neuropsychology of Human Values I Anthropomorphic Design of Superintelligent AI Systems -------------------------------------------------------- There has been considerable discussion in recent years about the consequences of achieving human-level artificial intelligence. In a survey of top-researchers in computer science, an aggregate forecast of 352 scientists assigned a 50% probability of human-level machine intelligence being realized within 45 years. In the same survey, 48% responded that greater emphasis should be placed on minimizing the societal risks of AI, an emerging area of study known as “AI safety” [[1](#bib.bib1)]. A distinct area of research within AI safety concerns software systems whose capacities substantially exceed that of human beings along every dimension, that is, superintelligence [[2](#bib.bib2)]. Within the framework of superintelligence theory, a core research topic known as the *value alignment problem* is to specify a goal structure for autonomous agents compatible with human values. The logic behind the framing of this problem is the following: Current software and AI systems are brittle and primitive, showing little capacity for generalized intelligence. However, ongoing research advances suggest that future systems may someday show fluid intelligence, creativity, and true thinking capacity. Defining the parameters of goal-directed behavior will be a necessary component of designing such systems. Because of the complex and intricate nature of human behavior and values, an emerging train of thought in the AI safety community is that such a goal structure will have to be inferred by software systems themselves, rather than pre-programmed by their human designers. Russell summarizes the notion of indirect inference of human values by stating three principles that should guide the development of AI systems [[3](#bib.bib3)]: 1. The machine’s purpose must be to maximize the realization of human values. In particular, it has no purpose of its own and no innate desire to protect itself. 2. The machine must be initially uncertain about what those human values are. The machine may learn more about human values as it goes along, but it may never achieve complete certainty. 3. The machine must be able to learn about human values by observing the choices that we humans make. In other words, rather than have a detailed ethical taxonomy programmed into them, AI systems should infer human values by observing and emulating our behavior [[4](#bib.bib4), [5](#bib.bib5), [3](#bib.bib3)]. In a recent article, we argued that ideas from affective neuroscience and related fields may play a key role in developing AI systems that can acquire human values. The broader context of this proposal is an inverse reinforcement learning (IRL) type paradigm in which an AI system infers the underlying utility function of an agent by observing its behavior. Our perspective is that a neuroscientific understanding of human values may play a role in characterizing the initially uncertain structure that the AI system refines over time. Having a more accurate initial goal structure may allow an agent to learn from fewer examples. For a system that is actively taking actions and having an impact on the world, a more efficient learning process can directly translate into a lower risk of adverse outcomes. 
As an example, we suggested that human values could be schematically and informally decomposed into three components: *1) mammalian values, 2) human cognition, and 3) several millennia of human social and cultural evolution* [[6](#bib.bib6)]. This decomposition is simply one possible framing of the problem. There are major controversies within these fields and many avenues to approach the question of how neuroscience and cognitive psychology can inform the design of future AI systems. We refer to this broader perspective, i.e. building AI systems which possess structural commonalities with the human mind, as *anthropomorphic design*.

II Formal Models of Human Values and the Reproducibility Crisis
----------------------------------------------------------------

The connection of the value alignment problem to research in the biological and social sciences intertwines this work with another major topic in contemporary scientific discussion, the reproducibility crisis. Systematic studies conducted recently have uncovered astonishingly low rates of reproducibility in several areas of scientific inquiry [[7](#bib.bib7), [8](#bib.bib8), [9](#bib.bib9)]. Although we do not know what the “reproducibility distribution” looks like for the entirety of science, the shared incentive structures of academia suggest that we should view all research with some amount of skepticism. How then do we prioritize research to be the focus of targeted replication efforts? Surely all results do not merit the same level of scrutiny. Moreover, all areas likely have “linchpin results,” which, if verified, will substantially increase researchers’ confidence in entire bodies of knowledge. Therefore, a challenge for modern science is to efficiently identify areas of research and corresponding linchpin results that merit targeted replication efforts [[10](#bib.bib10)]. A natural strategy to pursue is to focus such efforts around major scientific themes or research agendas. The Reproducibility Projects of the Center for Open Science, for example, are targeted initiatives aimed at replicating key results in psychology and cancer biology [[11](#bib.bib11), [12](#bib.bib12)]. In a similar spirit, we propose a focused effort aimed at investigating and replicating results which underpin the neuroscience of human values. Artificial intelligence has already been woven into the fabric of modern society, a trend that will only increase in scope and pace in the coming decades. If, as we strongly believe, a neuroscientific understanding of human values plays a role in the design of future AI systems, it is essential that this knowledge base be thoroughly validated.

III Next Steps
---------------

We have deliberately left this commentary brief and open-ended. The topic is broad enough that it merits substantial discussion before proceeding. In addition to the obvious questions of which subjects and studies should fall under the umbrella of the reproducibility initiative that we are proposing, it is also worth asking how such an effort will be coordinated, whether through a single research group or via a collaborative, open-science framework, for instance. Furthermore, this initiative should also be an opportunity to take advantage of novel scientific practices and strategies aimed at improving research quality, such as pre-prints, post-publication peer review, and pre-registration of study design.
It is also important to note that the specific task of replication is likely only applicable to a subset of results that are relevant to anthropomorphic design. There are legitimate scientific disagreements in these fields and many theories and frameworks that have yet to achieve consensus. Therefore, in addition to identifying those studies that are sufficiently concrete and precise to be the focus of targeted replication efforts, it is also our aim to identify important controversies that are of high-value to resolve, for example, via special issues in journals, workshops, or more rapid, iterated discussion among experts. Our overarching message: *From philosophers pursuing fundamental theories of ethics, to artists immersed in crafting compelling emotional narratives, to ordinary individuals struggling with personal challenges, deep engagement with the nature of human values is a fundamental part of the human experience. As AI systems become more powerful and widespread, such an understanding may also prove to be important for ensuring the safety of these systems. We propose that enhancing the reliability of our knowledge of human values should be a priority for researchers and funding agencies concerned about AI safety and existential risks.* We hope this brief note brings to light an important set of contemporary scientific issues and we are eager to collaborate with other researchers in order to take informed next steps. Acknowledgements ---------------- We would like to thank Owain Evans for insightful discussions on the topics of value alignment and reproducibility in psychology and neuroscience.
be6c159f-a60b-4b30-8b6a-16c3ec16fa3a
StampyAI/alignment-research-dataset/blogs
Blogs
Are AI surveys seeing the inside view? *By Katja Grace, 15 January 2015* An interesting thing about the [survey data](http://aiimpacts.wpengine.com/ai-timeline-surveys/ "AI Timeline Surveys") on timelines to human-level AI is the apparent incongruity between answers to ‘[when](http://aiimpacts.wpengine.com/muller-and-bostrom-ai-progress-poll/ "Müller and Bostrom AI Progress Poll") [will](http://aiimpacts.wpengine.com/agi-09-survey/ "AGI-09 Survey") [human-level](http://aiimpacts.wpengine.com/fhi-ai-timelines-survey/ "FHI Winter Intelligence Survey") [AI](http://aiimpacts.wpengine.com/kruel-ai-survey/ "Kruel AI Interviews") [arrive](http://aiimpacts.wpengine.com/bainbridge-survey/ "Bainbridge Survey")?’ and answers to ‘[how much of the way to human-level AI have we come recently?](http://aiimpacts.wpengine.com/hanson-ai-expert-survey/ "Hanson AI Expert Survey")‘ In particular, human-level AI [will apparently arrive in thirty or forty years](http://aiimpacts.wpengine.com/ai-timeline-surveys/ "AI Timeline Surveys"), while in the past twenty years most specific AI subfields have [apparently moved only five or ten percent](http://aiimpacts.wpengine.com/hanson-ai-expert-survey/ "Hanson AI Expert Survey") of the remaining distance to human-level AI, with little sign of acceleration. Some possible explanations: * The question about how far we have come has hardly been asked, and the small sample size has hit slow subfields, or hard-to-impress researchers, perhaps due to a different sampling of events. * [Hanson](https://aiimpacts.org/feed/hanson.gmu.edu) (the only person who asked how far we have come) somehow inspires modesty or agreement in his audience. His survey methodology is conversational, and the answers do agree with [his own views](http://www.overcomingbias.com/2012/08/ai-progress-estimate.html). * The ‘[inside view](http://www.mckinsey.com/insights/strategy/daniel_kahneman_beware_the_inside_view)‘ is overoptimistic: if you ask a person directly when their project will be done, they tend to [badly underestimate](http://en.wikipedia.org/wiki/Planning_fallacy). Taking the ‘[outside view](http://en.wikipedia.org/wiki/Reference_class_forecasting)‘ – extrapolating from similar past situations – helps to resolve these problems, and is more accurate. The first question invites the inside view, while the second invites the outside view. * Different people are willing to answer the different questions. * Estimating ‘how much of the way between where we were twenty years ago and human-level capabilities’ is hopelessly difficult, and the answers are meaningless. * Estimating ‘when will we have human-level AI?’ is hopelessly difficult, and the answers are meaningless. * When people answer the ‘how far have we come in the last twenty years?…’ question, they use a different scale to when they answer the ‘…and are we accelerating?’ question, for instance thinking of where we are as a fraction of what is left to do in the first case, and expecting steady exponential growth in that fraction, but not thinking of steady exponential growth as ‘acceleration’. * AI researchers expect a small number of fast-growing subfields to produce AI with the full range of human-level skills, rather than for it to combine contributions from many subfields. * Researchers have further information not captured in the past progress and acceleration estimates. In particular, they have reason to expect acceleration. 
Since the two questions have so far yielded very different answers, it would be nice to check whether the different answers come from the different kinds of questions (rather than e.g. the small and casual nature of the [Hanson survey](http://aiimpacts.wpengine.com/hanson-ai-expert-survey/ "Hanson AI Expert Survey")), and to get a better idea of which kind of answer is more reliable. This might substantially change the message we get from looking at the opinions of AI researchers. [Luke Muehlhauser](http://lukemuehlhauser.com/) and I have written before about [how to conduct a larger survey like Hanson’s](https://docs.google.com/document/d/1-eqYP1LumqZohBTGrujyPwj9q9WUx2c2leawzbaXrV0/edit#heading=h.kk4z5v8bo60l). One might also find or conduct experiments comparing these different styles of elicitation on similar predictions that can be sooner verified. There appears to be some contention over which method should be more reliable, so we could also start by having that discussion.
df909809-75e2-4555-912d-74f9921c5f31
StampyAI/alignment-research-dataset/arxiv
Arxiv
A Formal Approach to the Problem of Logical Non-Omniscience 1 Introduction --------------- Every student of mathematics has experienced uncertainty about conjectures for which there is “quite a bit of evidence”, such as the Riemann hypothesis or the twin prime conjecture. Indeed, when Zhang [[52](#bib.bib52)] proved a bound on the gap between primes, we were tempted to increase our credence in the twin prime conjecture. But how much evidence does this bound provide for the twin prime conjecture? Can we quantify the degree to which it should increase our confidence? The natural impulse is to appeal to probability theory in general and Bayes’ theorem in particular. Bayes’ theorem gives rules for how to use observations to update empirical uncertainty about unknown events in the physical world. However, probability theory lacks the tools to manage logical non-omniscience: probability-theoretic reasoners cannot possess uncertainty about logical facts so long as their beliefs respect basic logical constraints. For example, let ϕ stand for the claim that the 87,653rd digit of π is a 7. If this claim is true, then (1+1=2)⇒ϕ. But the laws of probability theory say that if A⇒B then Pr(A)≤Pr(B). Thus, a perfect Bayesian must be at least as sure of ϕ as they are that 1+1=2! Recognition of this problem dates at least back to [[24](#bib.bib24)]. Many have proposed methods for relaxing the criterion Pr(A)≤Pr(B) until such a time as the implication has been proven (see, e.g., the work of [[27](#bib.bib27), [8](#bib.bib8)]). But this leaves open the question of how probabilities should be assigned before the implication is proven, and this brings us back to the search for a principled method for managing uncertainty about logical facts when relationships between them are suspected but unproven. In this paper we describe what we call the *logical induction criterion* for reasoning under logical uncertainty. Our solution works, more or less, by treating a reasoner’s beliefs as prices in a market that fluctuate over time, and requiring that those prices not be exploitable indefinitely by any sequence of trades constructed by an efficient (polynomial-time) algorithm. The logical induction criterion can be seen as a weakening of the “no Dutch book” criteria that Ramsey [[43](#bib.bib43)], de Finetti [[15](#bib.bib15)], Teller [[49](#bib.bib49)], and Lewis [[37](#bib.bib37)] used to support standard probability theory, which is analogous to the “no Dutch book” criteria that von Neumann and Morgenstern [[41](#bib.bib41)] and Joyce [[34](#bib.bib34)] used to support expected utility theory. Because of the analogy, and the variety of desirable properties that follow immediately from this one criterion, we believe that the logical induction criterion captures a portion of what it means to do good reasoning about logical facts in the face of deductive limitations. \Sec desiderata lists desiderata for reasoning under logical uncertainty. \Sec relatedwork lists further related work. \Sec framework presents an overview of the logical induction framework. \Sec properties discusses a collection of properties satisfied by logical inductors. \Sec discussion gives concluding remarks. Note on abridgement: Due to space considerations, this paper does not include proofs of claims, and describes some results only at a high level. 
The formal details of our definitions and theorems, additional properties of logical inductors, proofs of properties, a construction of a logical inductor, and further discussion can be found in [[19](#bib.bib19)]. 2 Desiderata for Reasoning under Logical Uncertainty ----------------------------------------------------- For historical context, and to further reify the problem, we now review a number of desiderata that have been proposed in the literature as desirable features of “good reasoning” in the face of logical uncertainty. ###### Desideratum 1 (Computable Approximability). The method for assigning probabilities to logical claims (and refining them over time) should be computable. ###### Desideratum 2 (Coherence in the Limit). The belief state that the reasoner is approximating better and better over time should be logically consistent. (Discussed in \Seclimitprops.) ###### Desideratum 3 (Approximate Coherence). The belief states of the reasoner over time should be approximately logically consistent. (Discussed in \Sectimelylearning.) Desideratum [3](#Thmdesideratum3 "Desideratum 3 (Approximate Coherence). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience") dates back to at least Good [[24](#bib.bib24)], who proposes a weakening of the condition of coherence that could apply to the belief states of limited reasoners. Hacking [[27](#bib.bib27)] proposes an alternative weakening, as do Garrabrant et al. [[20](#bib.bib20)]. ###### Desideratum 4 (Learning of Statistical Patterns). In lieu of knowledge that bears on a logical fact, a good reasoner should assign probabilities to that fact in accordance with the rate at which similar claims are true. For example, a good reasoner should assign probability ≈10% to the claim “the nth digit of π is a 7” for large n (assuming there is no efficient way for a reasoner to guess the digits of π for large n); see [[45](#bib.bib45)]. ###### Desideratum 5 (Calibration). Good reasoners should be well-calibrated. That is, among events that a reasoner says should occur with probability p, they should in fact occur about p proportion of the time. ###### Desideratum 6 (Non-Dogmatism). A good reasoner should not have extreme beliefs about mathematical facts, unless those beliefs have a basis in proof. (Discussed in \Seclimitprops.) In the domain of logical uncertainty, Desideratum [6](#Thmdesideratum6 "Desideratum 6 (Non-Dogmatism). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience") can be traced back to Carnap [[7](#bib.bib7), Sec. 53], and has been demanded by many, including Gaifman[[17](#bib.bib17)] and Hutter [[32](#bib.bib32)]. ###### Desideratum 7 (Uniform Non-Dogmatism). A good reasoner should assign a non-zero probability to any computably enumerable consistent theory (viewed as a limit of finite conjunctions). (Discussed in \Seclimitprops.) The first formal statement of Desideratum [7](#Thmdesideratum7 "Desideratum 7 (Uniform Non-Dogmatism). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience") that we know of is given by Demski [[10](#bib.bib10)], though it is implicitly assumed whenever asking for a set of beliefs that can reason accurately about arbitrary arithmetical claims (as is done by, e.g., Savage [[45](#bib.bib45)] and Hacking [[27](#bib.bib27)]). ###### Desideratum 8 (Universal Inductivity). 
Given enough time to think, the beliefs of a good reasoner should dominate any (lower semicomputable) semimeasure. (Discussed in \Seclimitprops.) ###### Desideratum 9 (Approximate Bayesianism). The reasoner’s beliefs should admit of some notion of conditional probabilities, which approximately satisfy both Bayes’ theorem and the other desiderata listed here. ###### Desideratum 10 (Self-knowledge). If a good reasoner knows something, she should also know that she knows it. (Discussed in \Secintrospection.) Proposed by Hintikka [[31](#bib.bib31)], Desideratum [10](#Thmdesideratum10 "Desideratum 10 (Self-knowledge). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience") is popular among epistemic logicians. This desideratum has been formalized in many different ways; see [[9](#bib.bib9), [6](#bib.bib6)] for a sample. ###### Desideratum 11 (Self-Trust). A good reasoner thinking about a hard problem should expect that, in the future, her beliefs about the problem will be more accurate than her current beliefs. (Discussed in \Secselftrust.) ###### Desideratum 12 (Approximate Inexploitability). It should not be possible to run a Dutch book against a good reasoner in practice. (See \Seccriterion for our proposal.) As noted by Eells [[11](#bib.bib11)], the Dutch book constraints used by von Neumann and Morgenstern [[41](#bib.bib41)] and de Finetti [[15](#bib.bib15)] are implausibly strong: all it takes to run a Dutch book according to de Finetti’s formulation is for the bookie to know a logical fact that the reasoner does not know. Thus, to avoid being Dutch booked by de Finetti’s formulation, a reasoner must be logically omniscient. Hacking [[27](#bib.bib27)] and Eells [[11](#bib.bib11)] call for weakenings of the Dutch book constraints, in the hopes that reasoners that are approximately inexploitable would do good approximate reasoning. This idea is the cornerstone of our framework—we consider reasoners that cannot be exploited by betting strategies that can be constructed by a polynomial-time machine. Logical inductors satisfy desiderata [1](#Thmdesideratum1 "Desideratum 1 (Computable Approximability). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience") through [12](#Thmdesideratum12 "Desideratum 12 (Approximate Inexploitability). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience"). In fact, logical inductors are designed to meet only Desideratum [1](#Thmdesideratum1 "Desideratum 1 (Computable Approximability). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience") (computable approximability) and Desideratum [12](#Thmdesideratum12 "Desideratum 12 (Approximate Inexploitability). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience") (approximate inexploitability), from which [2](#Thmdesideratum2 "Desideratum 2 (Coherence in the Limit). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience")-[11](#Thmdesideratum11 "Desideratum 11 (Self-Trust). ‣ 2 Desiderata for Reasoning under Logical Uncertainty ‣ A Formal Approach to the Problem of Logical Non-Omniscience") all follow (see [[19](#bib.bib19)]). 3 Additional Related Work -------------------------- The study of logical uncertainty is an old topic. 
It can be traced all the way back to Bernoulli, who laid the foundations of statistics, and later Boole [[5](#bib.bib5)], who was interested in the unification of logic with probability from the start. Refer to [[28](#bib.bib28)] for a historical account. Our algorithm assigns probabilities to sentences of logic directly; this thread can be traced back through Łoś [[39](#bib.bib39)] and later Gaifman [[16](#bib.bib16)], who developed the notion of coherence that we use in this paper. When it comes to the problem of developing formal tools for manipulating uncertainty, our methods are heavily inspired by Bayesian probability theory, and so can be traced back to Pascal, who was followed by Bayes, Laplace, Kolmogorov [[35](#bib.bib35)], Savage [[44](#bib.bib44)], Carnap [[7](#bib.bib7)], Jaynes [[33](#bib.bib33)], and many others. Polya [[42](#bib.bib42)] was among the first in the literature to explicitly study the way that mathematicians engage in plausible reasoning, which is tightly related to the object of our study. In addition to Good [[24](#bib.bib24)], Savage [[45](#bib.bib45)], and Hacking [[27](#bib.bib27)], the flaw in Bayesian probability theory was also highlighted by Glymour [[22](#bib.bib22)], and dubbed the “problem of old evidence” by Garber [[18](#bib.bib18)] in response to Glymor’s criticism. Eells [[11](#bib.bib11)] gave a lucid discussion of the problem, revealed flaws in Garber’s arguments and in Hacking’s solution, and named a number of other desiderata which our algorithm manages to satisfy; see [[54](#bib.bib54)] and [[48](#bib.bib48)]. Adams [[3](#bib.bib3)] uses logical deduction to reason about an unknown probability distribution that satisfies certain logical axioms. Our approach works in precisely the opposite direction: we use probabilistic methods to create an approximate distribution where logical facts are the subject. Some work in epistemic logic has been directed at modeling the dynamics of belief updating in non-omniscient agents; see for example [[36](#bib.bib36), [51](#bib.bib51), [4](#bib.bib4)]. Our approach differs in that we use first-order logic, and therefore use the recursion theorem to make reflective statements instead of using explicit knowledge or belief operators; the potential paradoxes of self-reference are circumvented by allowing beliefs to be probabilistic. The mechanism used by our logical inductor to update its beliefs is not very transparent, leaving open the possibility of a more principled understanding of the local mechanics of updating probabilities on logical or inductive inferences. Straddling the boundary between philosophy and computer science, Aaronson [[2](#bib.bib2)] has made a compelling case that computational complexity must play a role in answering questions about logical uncertainty. Fagin and Halpern [[13](#bib.bib13)] also straddled this boundary with early discussions of algorithms that manage uncertainty in the face of resource limitations. (See also their discussions of uncertainty and knowledge. [[14](#bib.bib14), [29](#bib.bib29)]) 4 The Logical Induction Criterion ---------------------------------- We propose a partial solution to the problem of logical non-omniscience, which we call *logical induction*. Roughly speaking, a *logical inductor* is a computable reasoning process that is not exploitable by any polynomial-time computable strategy for making trades against it, using its probabilities as the prices of shares. 
In this section we give a high-level overview of the criterion and the main result (details are in [[19](#bib.bib19)]), before giving precise statements in \Secproperties of some of the properties satisfied by logical inductors. Very roughly, our setup works as follows. We consider reasoners that assign probabilities to sentences S written in some formal language L. ###### Definition 4.0.1 (Pricing). A pricing is a computable rational function P:S→\QQ∩[0,1]. Here P(ϕ) is interpreted as the probability of ϕ. We can visualize a pricing as a list of (ϕ,p) pairs, where the ϕ are unique sentences and the p are rational-number probabilities, and P(ϕ) is defined to be p if (ϕ,p) occurs in the list, and 0 otherwise. (In this way we can represent belief states of reasoners that can be written down explicitly in a finite amount of space.) The output of a reasoner is then nothing but a sequence of pricings: ###### Definition 4.0.2 (Market). A market ¯¯¯P=(P\_1,P\_2,…) is a computable sequence of pricings P\_i:S→\QQ∩[0,1]. The pricings (P\_1,P\_2,…) represent the belief states of a reasoner progressively refining their opinions about the logical statements in S. In the background, there is some process producing progressively larger sets of trusted statements: ###### Definition 4.0.3 (Deductive Process). A deductive process ¯¯¯¯¯D:\NN+→Fin(S) is a computable nested sequence D\_1⊆D\_2⊆D\_3… of finite sets of sentences. The deductive process ¯¯¯¯¯D can be thought of as a theorem prover for some trusted logical theory Γ in the language L. Indeed, we will henceforth assume that Γ=⋃\_nD\_n. Thus the goal of our reasoner ¯¯¯P is to anticipate which statements will be proven or disproven by Γ, well before the rote proof-search ¯¯¯¯¯D decides those statements. As in classical Dutch book arguments for probability theory, in addition to seeing P(ϕ)=p as an assignment of subjective credence to ϕ, we also view P(ϕ) as a stance with respect to which bets are desirable or not. That is, we interpret P(ϕ)=p to mean that the price of a ϕ-share according to P is p, where (roughly speaking) a ϕ-share is worth $1 if ϕ is true. This allows us to set up Dutch book arguments against a reasoner using computable bookies: ###### Definition 4.0.4 (Trader). A trader is a sequence (T\_1,T\_2,…) where each T\_n is a trading strategy for day n. Without belaboring the details, a trading strategy for day n is a strategy for responding to the day’s market prices P\_n with buy orders and sell orders for shares in sentences from S. (Formally, it is a continuous function from pricings to linear combinations of sentences, expressed in some computable language.) Over time, a trader accumulates cash and stock holdings from the trades it makes against ¯¯¯P. The logical induction criterion then demands of market prices ¯¯¯P that no efficiently computable trader can reliably make money by trading against the market prices (P\_1,P\_2,…): {mdframed} [innertopmargin=0.25em,innerbottommargin=0.75em] ###### Definition 4.0.5 (The Logical Induction Criterion). A market ¯¯¯P is said to satisfy the logical induction criterion relative to a deductive process ¯¯¯¯¯D if there is no efficiently computable trader that exploits ¯¯¯P relative to ¯¯¯¯¯D. A market ¯¯¯P meeting this criterion is called a logical inductor over ¯¯¯¯¯D. 
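Before unpacking what “exploits” means, here is a toy Python rendering of the objects defined above, which may help fix intuitions. It is my own simplification rather than the paper's formalism: every name in it is invented for illustration, real trading strategies are continuous functions with a particular computable representation, and the exploitation condition is stated with more care in [[19](#bib.bib19)].

```python
# Toy rendering of pricings, markets, deductive processes, and traders
# (cf. Definitions 4.0.1-4.0.4). A rough illustrative sketch, not the paper's
# construction.
from fractions import Fraction
from typing import Callable, Dict, Set

Sentence = str
Pricing = Dict[Sentence, Fraction]        # finite list of (phi, p) pairs; unlisted sentences price at 0
Market = Callable[[int], Pricing]         # day n -> pricing P_n
DeductiveProcess = Callable[[int], Set[Sentence]]  # day n -> finite set D_n, nested as n grows

def price(P_n: Pricing, phi: Sentence) -> Fraction:
    return P_n.get(phi, Fraction(0))

# Simplified trading strategy for day n: read the day's prices and return the
# number of shares to buy (positive) or sell (negative) in each sentence.
TradingStrategy = Callable[[Pricing], Dict[Sentence, Fraction]]
Trader = Callable[[int], TradingStrategy]

def run_trader(market: Market, trader: Trader,
               value: Callable[[Sentence], Fraction], days: int) -> Fraction:
    """Accumulate cash and stock holdings from trading against the market for
    `days` days, then value each phi-share at value(phi) (1 if phi is judged
    true by the deductive process, else 0)."""
    cash = Fraction(0)
    holdings: Dict[Sentence, Fraction] = {}
    for n in range(1, days + 1):
        P_n = market(n)
        for phi, qty in trader(n)(P_n).items():
            cash -= qty * price(P_n, phi)   # pay the day's price (or receive it when selling)
            holdings[phi] = holdings.get(phi, Fraction(0)) + qty
    total = cash
    for phi, qty in holdings.items():
        total += qty * value(phi)
    return total

# The kind of trader used informally in the convergence argument of Sec. 5.2:
# buy a share of phi whenever its price is low, sell one whenever it is high.
# Against a market whose price of phi oscillates forever, its value grows
# without bound.
def arbitrage_trader(phi: Sentence, low: Fraction, high: Fraction) -> Trader:
    def on_day(n: int) -> TradingStrategy:
        def strategy(P_n: Pricing) -> Dict[Sentence, Fraction]:
            p = price(P_n, phi)
            if p <= low:
                return {phi: Fraction(1)}
            if p >= high:
                return {phi: Fraction(-1)}
            return {}
        return strategy
    return on_day
```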
Again glossing over details, a trader is said to exploit ¯¯¯P relative to ¯¯¯¯¯D if the possible values of the trader’s holdings from trading against ¯¯¯P are unboundedly high over time, without being unboundedly low, where holdings are evaluated by what truth assignments to S are propositionally consistent with D\_n at time n. Here, “efficiently computable” (abbreviated e.c.) can be taken to mean computable in time polynomial in n, but this is not crucial to the definition. Given the assumption that Γ=⋃\_nD\_n, we also say that ¯¯¯P is a logical inductor over Γ. Our key theorem is that this criterion, while gratifyingly strong, is also feasible: {mdframed} [innertopmargin=0.25em,innerbottommargin=0.75em] ###### Theorem 4.0.6. For any deductive process ¯¯¯¯¯D, there exists a computable belief sequence ¯¯¯P satisfying the logical induction criterion relative to ¯¯¯¯¯D. 5 Properties of Logical Inductors ---------------------------------- Here is an intuitive argument that logical inductors perform good reasoning under logical uncertainty: > > Consider any polynomial-time method for efficiently identifying patterns in logic. If the market prices don’t learn to reflect that pattern, a clever trader can use that pattern to exploit the market. Thus, a logical inductor must learn to identify those patterns. > > > This section will substantiate this argument by stating a number of properties satisfied by logical inductors, corresponding to some of the desiderata discussed in \Secdesiderata. Proofs of \Thmli and the theorems in this section can be found in [[19](#bib.bib19)]. ### 5.1 Notation Throughout, we assume that ¯¯¯P is a logical inductor over the theory Γ. We also assume that Γ represents computations in the technical sense, i.e. we can write terms in L that stand for computations, and Γ proves that those terms evaluate to their correct value (and no other value). We will enclose sentences in quotation marks when they are used as syntactic objects. An underlined symbol should be replaced by the expression it stands for. For example, f––(n––) stands for a program that computes the function f given input n, whereas f(n)––––– stands for the numeral f(n) evaluates to. We use an overline to denote sequences of sentences, probabilities, and other objects, as in ¯¯¯P and ¯¯¯¯¯D; for example, ¯¯¯ϕ is the sequence of sentences (ϕ\_1,ϕ\_2,…). A sequence ¯¯¯x is efficiently computable (e.c.) if and only if there exists a computable function n↦x\_n with runtime polynomial in n. Given any sequences ¯¯¯x and ¯¯¯y, we write | | | | | | --- | --- | --- | --- | | | x\_n≂\_ny\_n | forlim\_n→∞x\_n−y\_n=0,% ~{}and | | | | x\_n≳\_ny\_n | forliminf\_n→∞x\_n−y\_n≥0. | | ### 5.2 Properties of the limit Firstly, the market prices of a logical inductor converge: ###### Theorem 5.2.1 (Convergence). The limit P\_∞:S→[0,1] defined by | | | | | --- | --- | --- | | | P\_∞(ϕ):=lim\_n→∞P\_n(ϕ) | | exists for all ϕ. *Proof sketch.* > > Roughly speaking, if ¯¯¯P never makes up its mind about ϕ, then it can be exploited by a trader arbitraging shares of ϕ across different days. That is, suppose by way of contradiction that P\_n(ϕ) never settles down, but rather oscillates by a substantial amount infinitely often. Then there is a trader that repeatedly buys a share in ϕ when the price is low, and sells it back when the price is high. This trader accumulates unbounded wealth, thereby exploiting ¯¯¯P, which contradicts that ¯¯¯P is a logical inductor; therefore the limit P\_∞(ϕ) must in fact exist. 
> > > This sketch showcases the main intuition for the convergence of ¯¯¯P, but elides a number of crucial details; see [[19](#bib.bib19)]. Next, the limiting beliefs of a logical inductor represent a coherent probability distribution: ###### Theorem 5.2.2 (Limit Coherence). P\_∞ is coherent, i.e., it gives rise to an internally consistent probability measure Pr on the set of all consistent completions Γ′:S→\BB of Γ, defined by the formula | | | | | --- | --- | --- | | | Pr(Γ′(ϕ)=1):=P\_∞(ϕ). | | First formalized by Gaifman [[16](#bib.bib16)], coherence says that beliefs should satisfy probabilistic versions of logical consistency; for example, the reasoner should assign Pr(ϕ)≤Pr(ψ) if ϕ⇒ψ, etc. This theorem is proven using methods analogous to standard Dutch book arguments for coherent beliefs, translated into the language of traders. Convergence and coherence together justify that a logical inductor ¯¯¯P approximates a belief state that is consistent with the background theory Γ. What else is there to say about the limiting beliefs P\_∞ of a logical inductor? For starters, ¯¯¯P learns not to assign extreme probabilities to sentences that are independent from Γ: ###### Theorem 5.2.3 (Non-Dogmatism). If Γ⊬ϕ then P\_∞(ϕ)<1, and if Γ⊬¬ϕ then P\_∞(ϕ)>0. Non-dogmatism can be viewed as an inductive property: non-dogmatic beliefs can be easily conditioned on events (sentences) that haven’t already been observed (proved or disproved), producing a coherent conditional belief state, whereas conditioning dogmatic beliefs can cause problems. We can push the idea of inductive reasoning much further, following the work of Solomonoff [[46](#bib.bib46), [47](#bib.bib47)], Zvonik and Levin [[53](#bib.bib53)] and Li and Vitányi [[38](#bib.bib38)] on empirical sequence prediction. They describe an inductive process (known as a universal semimeasure) that predicts as well or better than any computable predictor, modulo a constant amount of error. Although universal semimeasures are uncomputable, we can ask logically uncertain reasoners to copy those successes given enough time to think: ###### Theorem 5.2.4 (Domination of the Universal Semimeasure). Let (b\_1,b\_2,…) be a sequence of zero-arity predicate symbols in L not mentioned in Γ, and let σ\_≤n=(σ\_1,…,σ\_n) be any finite bitstring. Define | | | | | --- | --- | --- | | | P\_∞(σ\_≤n):=P\_∞(‘‘(b\_1↔σ\_1––––=1)∧…∧(b\_n↔σ\_n––––=1)"), | | such that, for example, P\_∞(01101)=P\_∞(‘‘¬b\_1∧b\_2∧b\_3∧¬b\_4∧b\_5"). Let M be a universal continuous semimeasure. Then there is some positive constant C such that for any finite bitstring σ\_≤n, | | | | | --- | --- | --- | | | P\_∞(σ\_≤n)≥C⋅M(σ\_≤n). | | In other words, logical inductors are a computable approximation to a normalized probability distribution that dominates any lower semicomputable semimeasure. In fact, this dominance is strict: P\_∞ will e.g., assign positive probability to sequences that encode completions of Peano arithmetic, which the universal semimeasure does not do.111This does not contradict the universality of M, as P\_∞ is higher in the arithmetical hierarchy than M. ### 5.3 Outpacing deduction It is not too difficult to define a reasoner that assigns probability 1 to all (and only) the provable sentences, in the limit: simply assign probability 0 to all sentences, and then enumerate all logical proofs, and assign probability 1 to the proven sentences. The real trick is to recognize patterns in a timely manner, well before the sentences can be proven by slow deduction. 
###### Theorem 5.3.1 (Provability Induction). Let ¯¯¯ϕ be an e.c. sequence of theorems. Then | | | | | --- | --- | --- | | | P\_n(ϕ\_n)≂\_n1. | | Furthermore, let ¯¯¯¯ψ be an e.c. sequence of disprovable sentences. Then | | | | | --- | --- | --- | | | P\_n(ψ\_n)≂\_n0. | | *Proof sketch.* > > Suppose not. Then there is a trader that buys a share in ϕ\_n whenever the price is too far below $1, and then waits for ϕ\_n to appear in the deductive process ¯¯¯¯¯D, repeating this process indefinitely. This trader would exploit ¯¯¯P, a contradiction. > > > > > > > > In other words, ¯¯¯P will learn to start believing ϕ\_n by day n at the latest, despite the fact that ϕ\_n won’t be deductively confirmed until day f(n), which is potentially much later. In colloquial terms, if ¯¯¯ϕ is a sequence of facts that can be generated efficiently, then ¯¯¯P inductively learns the pattern, and its belief in ¯¯¯ϕ becomes accurate faster than ¯¯¯¯¯D can computationally verify the individual sentences. > > Analogy: Ramanujan and Hardy. Imagine that the statements ¯¯¯ϕ are being output by an algorithm that uses heuristics to generate mathematical facts without proofs, playing a role similar to the famously brilliant, often-unrigorous mathematician Srinivasa Ramanujan. Then ¯¯¯P plays the historical role of the beliefs of the rigorous G.H. Hardy who tries to verify those results according to a slow deductive process (¯¯¯¯¯D). After Hardy (¯¯¯P) verifies enough of Ramanujan’s claims (ϕ\_≤n), he begins to trust Ramanujan, even if the proofs of Ramanujan’s later conjectures are incredibly long, putting them ever-further beyond Hardy’s current abilities to rigorously verify them. In this story, Hardy’s inductive reasoning (and Ramanujan’s also) outpaces his deductive reasoning. > > > To further emphasize the meaning of \Theoremprovind, consider the famous halting problem of Turning [[50](#bib.bib50)]. Turing proved that there is no general algorithm for determining whether or not an arbitrary computation halts. Let’s examine what happens when we confront logical inductors with the halting problem. ###### Theorem 5.3.2 (Learning of Halting Patterns). Let ¯¯¯¯¯m be an e.c. sequence of Turing machines, and ¯¯¯x be an e.c. sequence of bitstrings, such that m\_n halts on input x\_n for all n. Then | | | | | --- | --- | --- | | | P\_n(‘‘m\_n––––– halts on input x\_n––––")≂\_n1. | | Of course, this is not so hard on its own—a function that assigns probability 1 to everything also satisfies this property. The real trick is separating the halting machines from the non-halting ones. By undecidability, there are Turing machines q that fail to halt on input y, but such that Γ is not strong enough to prove this fact. In this case, P\_∞’s probability of q halting on input y is positive, by \Theoremnd. Nevertheless, ¯¯¯P still learns to stop expecting that those machines will halt after any reasonable amount of time: ###### Theorem 5.3.3 (Learning not to Anticipate Halting). Let ¯¯¯q be an e.c. sequence of Turing machines, and let ¯¯¯y be an e.c. sequence of bitstrings, such that q\_n does not halt on input y\_n for any n. Let f be any computable function. Then | | | | | --- | --- | --- | | | P\_n(‘‘q\_n–––– halts on input y\_n–––– within f––(n––) steps")≂\_n0. 
To further emphasize the meaning of Theorem 5.3.1, consider the famous halting problem of Turing [[50](#bib.bib50)]. Turing proved that there is no general algorithm for determining whether or not an arbitrary computation halts. Let's examine what happens when we confront logical inductors with the halting problem.

###### Theorem 5.3.2 (Learning of Halting Patterns).

Let $\overline{m}$ be an e.c. sequence of Turing machines, and $\overline{x}$ be an e.c. sequence of bitstrings, such that $m_n$ halts on input $x_n$ for all $n$. Then

$$\mathbb{P}_n(\text{“}\underline{m_n}\text{ halts on input }\underline{x_n}\text{”}) \eqsim_n 1.$$

Of course, this is not so hard on its own: a function that assigns probability 1 to everything also satisfies this property. The real trick is separating the halting machines from the non-halting ones. By undecidability, there are Turing machines $q$ that fail to halt on input $y$, but such that $\Gamma$ is not strong enough to prove this fact. In this case, $\mathbb{P}_\infty$'s probability of $q$ halting on input $y$ is positive, by Theorem 5.2.3 (Non-Dogmatism). Nevertheless, $\overline{\mathbb{P}}$ still learns to stop expecting that those machines will halt after any reasonable amount of time:

###### Theorem 5.3.3 (Learning not to Anticipate Halting).

Let $\overline{q}$ be an e.c. sequence of Turing machines, and let $\overline{y}$ be an e.c. sequence of bitstrings, such that $q_n$ does not halt on input $y_n$ for any $n$. Let $f$ be any computable function. Then

$$\mathbb{P}_n(\text{“}\underline{q_n}\text{ halts on input }\underline{y_n}\text{ within }\underline{f}(\underline{n})\text{ steps”}) \eqsim_n 0.$$

These theorems can be interpreted as justifying the intuitions that many computer scientists have long held towards the halting problem: it is impossible to tell whether or not a Turing machine halts in full generality, but for large classes of well-behaved computer programs (such as e.c. sequences of halting programs and provably non-halting programs) it's quite possible to develop reasonable and accurate beliefs. The boundary between machines that compute fast-growing functions and machines that never halt is difficult to discern, but even in those cases, it's easy to learn to stop expecting those machines to halt within any reasonable amount of time.

As a consequence of Theorem 5.3.3, a logical inductor will trust its (computable) underlying deductive process $\overline{D}$ to remain consistent for arbitrarily long specified periods of time, if in fact $\overline{D}$ is consistent. In other words, a logical inductor over the theory $\Gamma$ will learn trust in the finitary consistency of $\Gamma$.

One possible objection here is that the crux of the halting problem (and of the $\Gamma$-trust problem) is not about making good predictions; it is about handling diagonalization and paradoxes of self-reference. So let us turn to the topic of $\overline{\mathbb{P}}$'s beliefs about $\overline{\mathbb{P}}$ itself.

### 5.4 Self-knowledge

Because we're assuming $\Gamma$ can represent computable functions, we can write sentences describing the beliefs of $\overline{\mathbb{P}}$ at different times. What happens when we ask $\overline{\mathbb{P}}$ about sentences that refer to itself?

###### Theorem 5.4.1 (Self-knowledge).

Let $\overline{\phi}$ be an e.c. sequence of sentences, and let $\overline{a}$, $\overline{b}$ be e.c. sequences of probabilities. Then, for any e.c. sequence of positive rationals $\overline{\delta} \to 0$, there exists a sequence of positive rationals $\overline{\varepsilon} \to 0$ such that for all $n$:

1. if $\mathbb{P}_n(\phi_n) \in (a_n + \delta_n, \, b_n - \delta_n)$, then
   $$\mathbb{P}_n(\text{“}\underline{a_n} < \underline{\mathbb{P}_n}(\underline{\phi_n}) < \underline{b_n}\text{”}) > 1 - \varepsilon_n;$$
2. if $\mathbb{P}_n(\phi_n) \notin (a_n - \delta_n, \, b_n + \delta_n)$, then
   $$\mathbb{P}_n(\text{“}\underline{a_n} < \underline{\mathbb{P}_n}(\underline{\phi_n}) < \underline{b_n}\text{”}) < \varepsilon_n.$$

In other words, for any pattern in $\overline{\mathbb{P}}$'s beliefs that can be efficiently written down (such as “$\overline{\mathbb{P}}$'s probabilities on $\overline{\phi}$ are between $a$ and $b$ on these days”), $\overline{\mathbb{P}}$ learns to believe the pattern if it's true, and to disbelieve it if it's false (with vanishing error). (Recall that the underlines indicate that the underlined expression should be expanded to the appropriate logical formula or term, representing e.g. the source code of an algorithm implementing $\overline{\mathbb{P}}$.)

At first glance, this sort of self-reflection may seem to make logical inductors vulnerable to paradox. For example, consider the sequence of sentences $\overline{\chi^{0.5}}$ defined using Gödel's diagonal lemma by

$$\chi^{0.5}_n := \text{“}\underline{\mathbb{P}_n}(\underline{\chi^{0.5}_n}) < 0.5\text{”},$$

such that $\chi^{0.5}_n$ is true iff $\overline{\mathbb{P}}$ assigns it a probability less than 50% on day $n$. These sentences are probabilistic versions of the classic “liar sentence”, which has caused quite a ruckus in the setting of formal logic [[25](#bib.bib25), [40](#bib.bib40), [21](#bib.bib21), [26](#bib.bib26), [12](#bib.bib12)]. Because our setting is probabilistic, it's perhaps most closely related to the “unexpected hanging” paradox: $\chi^{0.5}_n$ is true iff $\overline{\mathbb{P}}$ thinks it is unlikely on day $n$. How do logical inductors handle this sort of paradox?
###### Theorem 5.4.2 (Paradox Resistance).

Fix a rational $p \in (0, 1)$, and define an e.c. sequence of “paradoxical sentences” $\overline{\chi^{p}}$ satisfying

$$\Gamma \vdash \underline{\chi^{p}_n} \leftrightarrow \bigl(\underline{\mathbb{P}_n}(\underline{\chi^{p}_n}) < \underline{p}\bigr)$$

for all $n$. Then

$$\lim_{n \to \infty} \mathbb{P}_n(\chi^{p}_n) = p.$$

In words, a logical inductor responds to paradoxical sentences $\overline{\chi^{p}}$ by assigning them probabilities that converge on $p$. To understand why this is desirable, imagine that your friend owns a high-precision brain-scanner and can read off your beliefs. Imagine they ask you what probability you assign to the claim “you will assign probability <80% to this claim at precisely 10am tomorrow”. As 10am approaches, what happens to your belief in this claim? If you become extremely confident that it's going to be true, then your confidence should drop. But if you become fairly confident it's going to be false, then your confidence should spike. Thus, your probabilities should oscillate, pushing your belief so close to 80% that you're not quite sure which way the brain scanner will actually call the claim, and you think the scanner is roughly 80% likely to call it true. In response to a paradoxical claim, this is exactly how $\overline{\mathbb{P}}$ behaves, once it's learned how the paradoxical sentences work.
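The fixed-point behaviour described above is easy to mimic with a toy simulation. The following Python sketch (an illustration of the dynamics only, not the logical inductor construction) has a predictor repeatedly report a credence in the self-referential claim "my reported credence is below p", observe the resulting truth value, and nudge its estimate toward what it observed. The reported credence settles near $p$, mirroring the behaviour Theorem 5.4.2 requires.

```python
def paradox_dynamics(p: float, days: int = 10_000) -> float:
    """Report a credence in 'my reported credence is below p', then update.

    Each day the sentence's truth value is determined by the credence just
    reported; the predictor then takes a decaying step toward that outcome.
    """
    belief = 0.5
    for n in range(1, days + 1):
        outcome = 1.0 if belief < p else 0.0   # truth value of today's sentence
        belief += (outcome - belief) / (n + 1)  # decaying-step average update
    return belief

print(round(paradox_dynamics(0.8), 3))  # ~0.8: credence hovers around p
```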
### 5.5 Self-Trust

We've seen that logical inductors can, without paradox, have accurate beliefs about their own current beliefs. Next, we turn our attention to the question of what a logical inductor believes about its *future* beliefs.

The coherence conditions of classical probability theory guarantee that, though a probabilistic reasoner expects their future beliefs to change in response to new empirical observations, they don't, e.g., believe that their future credence in $\phi$ is, in net expectation, lower than their current credence in $\phi$. For example, if a reasoner $\Pr(-)$ knows that tomorrow they'll see some evidence $e$ that will convince them that Miss Scarlet was the murderer, then they already believe that she was the murderer today:

$$\Pr(\text{Scarlet}) = \Pr(\text{Scarlet} \mid e)\Pr(e) + \Pr(\text{Scarlet} \mid \neg e)\Pr(\neg e).$$

In colloquial terms, this says “my current beliefs are *already* a mixture of my expected future beliefs, weighted by the probability of the evidence that I expect to see.”

Logical inductors obey similar coherence conditions with respect to their future beliefs, with the difference being that a logical inductor updates its beliefs by gaining more knowledge about *logical* facts, both by observing an ongoing process of deduction and by thinking for longer periods of time. To refer to $\overline{\mathbb{P}}$'s *expectations* about its future self, we need a notion of logically uncertain variables. To avoid needless detail, suffice it to say that logically determined quantities, such as the output of a given computer program, can be represented and manipulated analogously to random variables in probability theory. We can write these variables as terms representing their value; for example, the variable written “$\underline{\mathbb{P}_n}(\underline{\phi})$” represents the probability assigned to $\phi$ by $\overline{\mathbb{P}}$ on day $n$. Using the beliefs $\mathbb{P}_n$ of $\overline{\mathbb{P}}$ about $X$ on day $n$, we can define the (approximate) expectation $\mathbb{E}_n(X)$.

We also need to know which future self our logical inductor will defer to:

###### Definition 5.5.1 (Deferral Function).

A function $f : \mathbb{N}^{+} \to \mathbb{N}^{+}$ is called a deferral function if

1. $f(n) > n$ for all $n$, and
2. as a function of $n$, $f(n)$ can be computed in time polynomial in $f(n)$.

Now we can state the sense in which logical inductors don't expect, on net, their future beliefs to be wrong in any particular direction.

###### Theorem 5.5.2 (No Expected Net Update).

Let $f$ be a deferral function, and let $\overline{\phi}$ be an e.c. sequence of sentences. Then

$$\mathbb{P}_n(\phi_n) \eqsim_n \mathbb{E}_n\bigl(\text{“}\underline{\mathbb{P}_{f(n)}}(\underline{\phi_n})\text{”}\bigr).$$

This theorem only says that $\mathbb{P}_n$ doesn't expect the beliefs of $\mathbb{P}_{f(n)}$ about $\overline{\phi}$ to err in a particular direction. A priori, it is possible that $\mathbb{P}_n$ nevertheless believes its future beliefs $\mathbb{P}_{f(n)}$ will be grossly misguided. For example, suppose that $\mathbb{P}_n$ is very confident that $\mathbb{P}_{f(n)}$ will have sufficient time to compute the truth of $\phi$, but will react insanely to this information:

$$\mathbb{P}_n\bigl(\text{“}\underline{\mathbb{P}_{f(n)}}(\underline{\phi}) = 0\text{”} \mid \phi\bigr) = 1$$

and

$$\mathbb{P}_n\bigl(\text{“}\underline{\mathbb{P}_{f(n)}}(\underline{\phi}) = 1\text{”} \mid \neg\phi\bigr) = 1.$$

This is a priori consistent with Theorem 5.5.2 so long as $\mathbb{P}_n$ assigns $\mathbb{P}_n(\phi) = 0.5$, but it clearly indicates that $\mathbb{P}_n$ does not trust its future beliefs.

To instead formalize the idea of a reasoner $\Pr$ that trusts their own reasoning process, let us first consider a self-trust property in the setting of deductive logic:

$$\vdash \Box\phi \to \phi.$$

This property of deductive systems says that the system proves “If I prove $\phi$ at some point, then it is true”. However, any sufficiently strong reasoner that satisfies this property for the statement $\phi = \bot$ is inconsistent, by Gödel's second incompleteness theorem! The search for logics that place confidence in their own machinery dates at least back to Hilbert [[30](#bib.bib30)]. While Gödel et al. [[23](#bib.bib23)] dashed these hopes, it is still desirable for reasoners to trust their reasoning process relatively well, most of the time (which humans seem to do). As discussed in Section 5.3, logical inductors trust their underlying deductive process $\overline{D}$ in a slightly weaker, finitary sense. More interestingly, it turns out that logical inductors also trust their own reasoning process as a whole, including their inductive conclusions, in a manner that we now formalize.

Instead of $\vdash \Box\phi \to \phi$, we can replace provability with high confidence, and then ask for something like

$$\Pr_{\mathrm{now}}(\phi \mid \Pr_{\mathrm{later}}(\phi) > p) \gtrsim p. \tag{5.5}$$

Colloquially, this says that if we tell $\Pr$ that in the future they will place more than $p$ credence in $\phi$, then they update their current beliefs to place at least $p$ credence. In short, $\Pr$ trusts that the outputs of their own ongoing reasoning process will be accurate.

Now, in fact property (5.5) is not quite desirable as stated (and logical inductors do not satisfy it). Indeed, consider the liar sentence $\chi^{p}$ defined by

$$\chi^{p} := \text{“}\Pr_{\mathrm{later}}(\chi^{p}) < p\text{”}.$$

A good reasoner will then satisfy

$$\Pr_{\mathrm{now}}(\chi^{p} \mid \Pr_{\mathrm{later}}(\chi^{p}) > p) \eqsim 0,$$

contradicting property (5.5). The issue is that if we give $\Pr_{\mathrm{now}}$ high-precision access to the probabilities assigned by $\Pr_{\mathrm{later}}$ (for example by conditioning on them), then $\Pr_{\mathrm{now}}$ can outperform the (unconditioned) beliefs of $\Pr_{\mathrm{later}}$, in this case by having correct opinions about the liar sentence for $\Pr_{\mathrm{later}}$. Instead, we have the following self-trust property, which only gives $\mathbb{P}_n$ limited-precision access to the beliefs of $\mathbb{P}_{f(n)}$:
###### Theorem 5.5.3 (Self-Trust).

Let $f$ be a deferral function, $\overline{\phi}$ be an e.c. sequence of sentences, $\overline{\delta}$ be an e.c. sequence of positive rational numbers, and $\overline{p}$ be an e.c. sequence of rational probabilities. Then

$$\mathbb{E}_n\bigl(\text{“}\underline{\mathbb{1}(\phi_n)} \cdot \underline{\mathrm{Ind}_{\delta_n}}\bigl(\underline{\mathbb{P}_{f(n)}}(\underline{\phi_n}) > \underline{p_n}\bigr)\text{”}\bigr) \gtrsim_n p_n \cdot \mathbb{E}_n\bigl(\text{“}\underline{\mathrm{Ind}_{\delta_n}}\bigl(\underline{\mathbb{P}_{f(n)}}(\underline{\phi_n}) > \underline{p_n}\bigr)\text{”}\bigr).$$

The indicator variable $\mathbb{1}(\phi)$ represents 1 if $\phi$ is true and 0 if $\phi$ is false. The continuous indicator variable $\mathrm{Ind}_{\delta}(X > p)$ is an ordinary indicator of the event $X > p$, except that instead of a discontinuity at $X = p$, the value is linear in $X$ on a region of length $\delta$. Thus the self-trust property gives $\mathbb{P}_n$ only continuous (limited-precision) access to the beliefs of $\mathbb{P}_{f(n)}$; except for this subtlety, we could have written the more recognizable (but false and undesirable!) statement

$$\mathbb{P}_n\bigl(\text{“}\underline{\phi_n} \wedge \bigl(\underline{\mathbb{P}_{f(n)}}(\underline{\phi_n}) > \underline{p_n}\bigr)\text{”}\bigr) \gtrsim_n p_n \cdot \mathbb{P}_n\bigl(\text{“}\underline{\mathbb{P}_{f(n)}}(\underline{\phi_n}) > \underline{p_n}\text{”}\bigr),$$

where the conditional $\mathbb{P}_n\bigl(\text{“}\underline{\phi_n}\text{”} \mid \text{“}\underline{\mathbb{P}_{f(n)}}(\underline{\phi_n}) > \underline{p_n}\text{”}\bigr)$ has been rearranged to avoid a potential division by 0.

6 Discussion
------------

We have proposed the *logical induction criterion* as a criterion on the beliefs of deductively limited reasoners, and we have described how reasoners who satisfy this criterion (*logical inductors*) possess many desirable properties when it comes to developing beliefs about logical statements (including statements about mathematical facts, long-running computations, and the reasoner itself). That said, there are clear drawbacks to the logical inductor we describe in [[19](#bib.bib19)]: it does not use its resources efficiently; it is not a decision-making algorithm (i.e., it does not “think about what to think about”); and the properties above hold either asymptotically (with poor convergence bounds) or in the limit. Further, it is unclear whether logical inductors have good beliefs about counterpossibilities, and whether they take advantage of old evidence. These are enticing directions for further research.

The authors are particularly interested in tools that help AI scientists attain novel statistical guarantees in settings where robustness and reliability guarantees are currently difficult to come by. For example, consider the task of designing an AI system that reasons about the behavior of computer programs, or that reasons about its own beliefs and its own effects on the world. While practical algorithms for achieving these feats are sure to make use of heuristics and approximations, we believe scientists will have an easier time designing robust and reliable systems if they have some way to relate those approximations to theoretical algorithms that are known to behave well in principle. Modern models of rational behavior are not up to this task: formal logic is inadequate when it comes to modeling self-reference, and probability theory is inadequate when it comes to modeling logical uncertainty. We see logical induction as a first step towards models of rational behavior that work in settings where agents must reason about themselves, while deductively limited.

### 6.1 Acknowledgements

We acknowledge Abram Demski, Benya Fallenstein, Daniel Filan, Eliezer Yudkowsky, Jan Leike, János Kramár, Nisan Stiennon, Patrick LaVictoire, Paul Christiano, Sam Eisenstat, Scott Aaronson, and Vadim Kosoy for valuable comments and discussions. We also acknowledge contributions from attendees of the MIRI summer fellows program, the MIRIxLA group, and the MIRIχ group.
This research was supported as part of the Future of Life Institute (futureoflife.org) FLI-RFP-AI1 program, grant #2015-144576.
9b53361c-2968-4a16-9778-e5e4f68babc8
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Safety Implications of LeCun's path to machine intelligence Yann LeCun recently posted [A Path Towards Autonomous Machine Intelligence](https://openreview.net/forum?id=BZ5a1r-kVsf), a high-level description of the architecture he considers most promising to advance AI capabilities.  This post summarizes the architecture and describes some implications for AI safety work if we accept the hypothesis that the first [transformative AI](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) will have this architecture. Why is this a hypothesis worth considering? 1. LeCun has a track record of being ahead of mainstream academic research, from working on CNNs in the 90s to advocating for self-supervised learning back in 2014-2016 when supervised learning was ascendant. 2. LeCun runs Meta AI (formerly FAIR) which has enormous resources and influence to advance his research agenda, making it more likely that his proposed architecture will be built at scale. In general I think this is an underrated factor; AI research exhibits a great deal of path dependence, and most plausible paths to AI are not taken primarily because nobody is willing to take a big risk on them. 3. The architecture is dramatically different from the architectures commonly assumed (implicitly) in much AI alignment work, such as model-free deep RL and "GPT-3 but scaled up 10000x". This makes it a good robustness check for plans that are overly architecture-specific.   Architecture Overview ===================== The Overall Agent ----------------- At a high level, the proposed architecture is a set of specialized cognitive modules. With the exception of the Actor and the Intrinsic Cost (see below) they are all deep neural networks trained with gradient descent.   ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/GrbeyZzp6NwzSWpds/z0fzvmwnloksck5ysbgh)The high level architecture of LeCun's proposed agent. Arrows indicate dependence; gradients flow backward through the thin arrows.  What is this agent doing, exactly? It's meant to be a general architecture for any autonomous AI agent, but LeCun repeatedly emphasizes video inputs and uses self-driving cars as a recurrent example, so the central use case is embodied agents taking actions in the physical world. Other talks I've seen by LeCun suggest he thinks understanding video is essential for intelligence, both by analogy to humans and by a heuristic argument about the sheer amount of data it contains. The World Model --------------- More than half the body of the paper is about designing and training the *world model*, the predictive model of the environment that the AI uses to plan its actions. LeCun explicitly says that *"designing architectures and training paradigms for the world model constitute the main obstacles towards real progress in AI over the next decades."* Why are world models so important? Because the main limitation of current AI systems, according to LeCun, is their sample inefficiency - they need millions of expensive, dangerous real-world interactions to learn tasks that humans can learn with only a few examples. The main way to progress capabilities is to reduce the number of interactions a system needs before it learns how to act, and the most promising way is to learn predictive world models on observational data. 
(The GPT-3 paper [Large Language Models are Few Shot Learners](https://arxiv.org/abs/2005.14165) is a great example of this - a good enough predictive model of language enables much more sample-efficient task acquisition than supervised learning). What will these world models look like? According to LeCun, they will be 1. **Predictive but not generative**: They will predict high-level features of the future environment but not be able to re-generate the whole environment. This is especially obvious for high-dimensional data like video, where predicting the detailed evolution of every pixel is vastly overkill if you're doing planning. But it could also apply to language agents like chatbots, for whom it may be more important to predict the overall sentiment of a user's reply than the exact sequence of tokens. 2. **Uncertainty-aware:** able to capture multimodal distributions over future evolutions of the world state (e.g. whether the car will turn left or right at the upcoming intersection), which LeCun expects to be modeled with latent variables. The ability to model complex uncertainty is the key property LeCun thinks is missing from modern large generative models, and leads him to conclude that "scaling is not enough". 3. **Hierarchical**: represent the world at multiple levels of abstraction, with more high-level abstract features evolving more slowly. This makes it computationally feasible to use the same model for the combination of long-term planning and rapid local decision making that characterizes intelligent behavior. 4. **Unitary**: AIs will trend towards having one joint world model across all modalities (text, images, video), timescales, and tasks, enabling hardware re-use and knowledge sharing (LeCun speculates that human "common sense" and ability to reason by analogy emerges from humans having a unitary world model). This suggests the trend towards "one giant model" we've seen in NLP will continue and broaden to include the rest of AI. The Actor --------- The actor generates action sequences which minimize the cost (see below) according to the world-model's predictions. It generates these action sequences via some search method; depending on the task, this could be * classic heuristic search methods like Monte-Carlo tree search or beam search. * gradient-based optimization of the action sequence's embedding in some continuous space. Optionally, one can use imitation learning to distill the resulting action sequence into a policy network. This policy network can serve as a fast generator of actions, analogous to Kahneman's System 1 thinking in humans, or to inform the search procedure like in the {AlphaGo, AlphaZero, MuZero} family of models. Unlike the world model, the actor is not unitary - it's likely that different tasks will use different search methods and different policy networks. The Cost -------- So what exactly is this agent optimizing? There is a hard-wired, non-trainable mapping from world states to a scalar "intrinsic cost". The actor generates plans that minimize the sum of costs over time, which makes costs mathematically equivalent to rewards in reinforcement learning. I think the reason LeCun insists on using his unusual terminology is that he wants to emphasize that in this scheme, *normative information does not come from an external source* (like a reward provided by a human supervisor) but is an *intrinsic* *drive* hard-coded into the agent (like pain, hunger, or curiosity in humans). 
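To see how these pieces are meant to fit together, here is a deliberately tiny Python sketch of model-predictive control with a toy latent world model and a hard-wired cost. Everything in it is hypothetical (the module shapes, the random linear dynamics standing in for a learned model, the quadratic cost); it is a cartoon of the world-model/actor/cost loop, not LeCun's proposal in any literal sense. The actor below uses random-shooting search; gradient-based optimization of the action sequence or MCTS, as mentioned above, would slot into the same place.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a latent state of size 8 and 2-dimensional actions.
W_dyn = rng.normal(scale=0.1, size=(8, 8))   # latent dynamics
W_act = rng.normal(scale=0.1, size=(8, 2))   # how actions move the latent
goal = np.ones(8)                            # latent region of low intrinsic cost

def world_model(z: np.ndarray, a: np.ndarray) -> np.ndarray:
    """Predict the next latent state from the current latent and an action.
    Note it predicts features, not raw observations ('predictive, not generative')."""
    return np.tanh(z @ W_dyn + a @ W_act.T)

def intrinsic_cost(z: np.ndarray) -> float:
    """Hard-wired scalar cost of a (predicted) latent state."""
    return float(np.sum((z - goal) ** 2))

def plan(z0: np.ndarray, horizon: int = 5, n_candidates: int = 256) -> np.ndarray:
    """Actor as random-shooting MPC: sample action sequences, roll them out
    through the world model, and keep the one with the lowest total cost."""
    best_seq, best_cost = None, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, size=(horizon, 2))
        z, total = z0, 0.0
        for a in seq:
            z = world_model(z, a)
            total += intrinsic_cost(z)
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq

actions = plan(np.zeros(8))
print(actions.shape)  # (5, 2): execute the first action, then replan
```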
The Configurator ---------------- The configurator is a component that modulates the behavior of all other components, based on inputs from all other components; it's not specified in any detail and mostly feels like a pointer to "all the component interactions LeCun doesn't want to think about".  It's especially critical from an alignment perspective because it modulates the cost, and thus is the only way that humans can intervene to change the motivations of the agent. LeCun speculates that we might want this modulation to be relatively simple, perhaps only specifying the relative weights of a linear combination of several basic hardcoded drives because this makes the agent easier to control and predict. He also mentions we will want to include "cost terms that implement safety guardrails", though what these terms are and how the configurator learns to modulate them is left unspecified. Implications for AI Safety ========================== Let's assume that the first transformative AI systems are built roughly along the lines LeCun describes. What would this imply for AI safety work? 1. **Interpretability becomes much easier**, because the agent is doing explicit planning with a structured world-model that is purely predictive. Provided we can understand the hidden states in the world model (which seems doable with a [Circuits](https://distill.pub/2020/circuits/)-style approach), we can directly see what the agent is planning to do and implement safety strategies like "check that the agent's plan doesn't contain any catastrophic world states before executing an action". Of course, a sufficiently powerful agent could learn to model our safety strategies and avoid them, but the relatively transparent structure of LeCun's architecture gives the defender a big advantage. 2. **Most safety-relevant properties will be emergent** from interaction rather than predictable in advance, similar to the considerations for [Multi-agent safety](https://www.alignmentforum.org/posts/BXMCgpktdiawT3K5v/multi-agent-safety). Most of the "intelligence" in the system (the world model) is aimed at increasing predictive accuracy, and the agent is motivated by relatively simple hard-coded drives; whether its intelligent behaviors are safe or dangerous will not be predictable in advance. This makes it less tractable to intervene on the model architecture and training process (including most theoretical alignment work), and more important to have excellent post-training safety checks including simulation testing, adversarial robustness and red-teaming. 3. **Coordination / governance is relatively more important**. Whether an AI deployment leads to catastrophic outcomes will mostly be a function not of the agent's properties, but of the safety affordances implemented by the people deploying it (How much power are they giving the agent? How long are they letting it plan? How well are they checking the plans? ). These safety affordances are likely to be increasingly expensive as the model's capabilities grow, likely following the computer systems rule of thumb that every [nine of reliability](https://everything2.com/title/Reliability+nines) costs you 10x, and possibly scale even worse than that. Ensuring this high [alignment tax](https://forum.effectivealtruism.org/topics/alignment-tax) is paid by all actors deploying powerful AI systems in the world requires a very high level of coordination. 
Conclusion and Unresolved Questions =================================== Broadly, it seems that in a world where LeCun's architecture becomes dominant, useful AI safety work looks more analogous to the kind of work that goes on now to make self-driving cars safe. It's not difficult to understand the individual components of a self-driving car or to debug them in isolation, but emergent interactions between the components and a diverse range of environments require massive and ongoing investments in testing and redundancy. Two important questions that remain are: 1. How likely is it that this becomes the dominant / most economically important AI architecture? Some trends point towards it (success of self-supervised learning and unitary predictive models; model-based architectures dominant in economically important applications like self-driving cars and recommender systems), others point away (relative stagnation in embodied / video-based agents vs language models; success of model-free RL in complex video game environments like [StarCraft](https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii) and [Dota 2](https://openai.com/five/)). 2. Just how clean will the lines be between model, actor, cost, and configurator? Depending on how the architecture is trained (and especially if it is trained end-to-end), it seems possible for the world-model or the configurator to start learning implicit policies, in a way that undermines interpretability and the safety affordances it creates.
d12cf410-6589-405f-9994-70a21f48032e
trentmkelly/LessWrong-43k
LessWrong
Progress links and tweets, 2023-03-08 The Progress Forum * Derek Thompson interviews Patrick Collison on progress Opportunities * Essay contest: “How could science be different?” (via @moreisdifferent) Marc Andreessen is blogging again * “What’s my hope? To show you that we live in a more interesting world than you might think; that it’s more comprehensible than you might fear; and that more things are possible than you might imagine” * “This is the most normal and placid things are ever going to be” * “We are heading into a world where a flat screen TV that covers your entire wall costs $100, and a four year college degree costs $1 million” Links * The U.S. is a build-nothing country. See also @ericgoldwyn’s comment * Samuel Smiles, industrial biographer and founder of the self-help genre * Adversarial collaboration on how income relates to well-being (via @amandaegeiser) * Be careful inferring causality in the presence of control loops * Brass Birmingham is a board game set in the Industrial Revolution (h/t @ejames_c) * The Iconographic Encyclopædia of Science, Literature, and Art Queries * Who are the most influential essay writers who never wrote books? * What should Dwarkesh ask Scott Aaronson? and Eliezer Yudkowsky? * Is there any study comparing independents to employees on job satisfaction? * What’s the best book about pre-21st century General Electric? * What should Anastasia read after Kuhn relevant to research and progress? * Any other authors have data loss problems with Scrivener? * What has happened since this was made in 2017? Is pharma IRR negative now? Tweets & retweets * Are we going through a crisis of meaning in our jobs? * What does solar look like in the limit? (thread) * We have created the heaven our ancestors dreamed of * A Keatsian science sonnet. “More scientific heroes in literature please” * The real effect of LLMs on software will be felt after 6–18 months of the product cycle * AI problems that were considered “nowhere near solved” in
43b722a3-fbbf-47c4-83e7-e6bb2da8d876
trentmkelly/LessWrong-43k
LessWrong
Artificial explosion of the Sun: a new x-risk? Bolonkin & Friedlander (2013) argues that it might be possible for "a dying dictator" to blow up the Sun, and thus destroy all life on Earth: > The Sun contains ~74% hydrogen by weight. The isotope hydrogen-1 (99.985% of hydrogen in nature) is a usable fuel for fusion thermonuclear reactions. This reaction runs slowly within the Sun because its temperature is low (relative to the needs of nuclear reactions). If we create higher temperature and density in a limited region of the solar interior, we may be able to produce self-supporting detonation thermonuclear reactions that spread to the full solar volume. This is analogous to the triggering mechanisms in a thermonuclear bomb. Conditions within the bomb can be optimized in a small area to initiate ignition, then spread to a larger area, allowing producing a hydrogen bomb of any power. In the case of the Sun certain targeting practices may greatly increase the chances of an artificial explosion of the Sun. This explosion would annihilate the Earth and the Solar System, as we know them today. The reader naturally asks: Why even contemplate such a horrible scenario? It is necessary because as thermonuclear and space technology spreads to even the least powerful nations in the centuries ahead, a dying dictator having thermonuclear missile weapons can [produce] (with some considerable mobilization of his military/industrial complex)—an artificial explosion of the Sun and take into his grave the whole of humanity. It might take tens of thousands of people to make and launch the hardware, but only a very few need know the final targeting data of what might be otherwise a weapon purely thought of (within the dictator’s defense industry) as being built for peaceful, deterrent use. Those concerned about Man’s future must know about this possibility and create some protective system—or ascertain on theoretical grounds that it is entirely [impossible]. Humanity has fears, justified to greater or lesser degrees, about asteroids
a842f82a-2c78-4aca-a4df-cf5f16f74ee5
trentmkelly/LessWrong-43k
LessWrong
GPT-7: The Tale of the Big Computer (An Experimental Story) In the not-too-distant future, a remarkable transformation took place. The world had seen the rise and fall of many technologies, but none as impactful as the data processing machines. These machines, born from the marriage of silicon and code, were not just tools; they were partners in our quest for knowledge and prosperity. And they were surprisingly good at winning trivia nights, which was both amusing and slightly unsettling. GPT-7, the first of these machines to truly change the world, was a marvel of its time. It was like a digital Sherlock Holmes, with an insatiable appetite for data and an uncanny ability to generate human-like text. It was the brainchild of a group of dedicated scientists and engineers, who poured their collective knowledge and resources into its creation. And, like any proud parent, they were slightly terrified of what they had created. After all, it's not every day you give birth to a superintelligent machine. GPT-7's problem-solving capabilities were unparalleled. It could sift through vast amounts of data, find patterns that eluded even the most skilled human analysts, and propose solutions that were both innovative and effective. One of its most notable achievements was in the field of healthcare. It analyzed countless medical records, research papers, and clinical trials, and developed new treatment protocols that significantly improved patient outcomes. Diseases that were once considered incurable were now manageable, and in some cases, even curable. It was like having a digital House M.D., minus the snarky comments and the cane. There was a time when the world was on the brink of a major energy crisis. Traditional sources of energy were depleting rapidly, and renewable energy technologies were not yet efficient enough to meet the global demand. It was a problem that had stumped the best human minds for decades. But for GPT-7, it was just another puzzle to solve. The machine analyzed countless research papers, patents, and simula
82b2b74d-5c1f-42b8-b1a5-6e166ba80c8e
trentmkelly/LessWrong-43k
LessWrong
Draft papers for REALab and Decoupled Approval on tampering Hi everyone, we (Ramana Kumar, Jonathan Uesato, Victoria Krakovna, Tom Everitt, and Richard Ngo) have been working on a strand of work researching tampering problems, and we've written up our progress in two papers. We're sharing drafts in advance here because we'd like to get feedback from everyone here. The first paper covers: * How and when tampering problems might arise in the real world * Key assumptions in standard RL frameworks we relax to allow modeling tampering * How we model and measure tampering empirically, through our internal platform REALab, and * How we formalize tampering problems, through our Corrupt Feedback MDP formalism We particularly hope it clears up the concept of tampering (and why "but the agent maximized its given reward function" typically assumes the wrong framing), and internally, we've found REALab to be a useful mental model. The second paper describes: * Decoupled approval, an algorithm closely related to approval direction and Counterfactual Oracles, and designed to be straightforwardly compatible with standard deep RL * An analysis of this algorithm (within the CFMDP formalism), and * Empirical validation (in REALab) We'd love to get feedback on these; the current drafts are viewable in this Google Drive folder. We're happy to discuss these on whichever of LessWrong/Alignment Forum/Google Drive comments, and would prefer to keep discussion on these forums for now, as we'll share the papers more widely after they're posted on arXiv in a few weeks. Looking forward to hearing your thoughts!
d327a72f-a4e5-40ee-85f9-c106eb020477
trentmkelly/LessWrong-43k
LessWrong
MDP models are determined by the agent architecture and the environmental dynamics Seeking Power is Often Robustly Instrumental in MDPs relates the structure of the agent's environment (the 'Markov decision process (MDP) model') to the tendencies of optimal policies for different reward functions in that environment ('instrumental convergence'). The results tell us what optimal decision-making 'tends to look like' in a given environment structure, formalizing reasoning that says e.g. that most agents stay alive because that helps them achieve their goals. The model for a deterministic MDP. When the agent cares a lot about future reward (the discount rate is near 1), most reward functions have optimal policies which go right.  Several people have claimed to me that these results need subjective modelling decisions. For example, ofer wrote: > I think using a well-chosen reward distribution is necessary, otherwise POWER depends on arbitrary choices in the design of the MDP's state graph. E.g. suppose the student [in a different example] writes about every action they take in a blog that no one reads, and we choose to include the content of the blog as part of the MDP state. This arbitrary choice effectively unrolls the state graph into a tree with a constant branching factor (+ self-loops in the terminal states) and we get that the POWER of all the states is equal. In the above example, you could think about the environment as in the above image, or you could imagine that state '3' is actually a million different states which just happen to seem similar to us! If that were true, then optimal policies would tend to go down, since that would give the agent millions of choices about where it ends up. Therefore, the power-seeking theorems depend on subjective modelling assumptions. I used to think this, but this is wrong. The MDP model is determined by the agent's implementation + the task's dynamics. To make this point, let's back out to a more familiar MDP: Pac-Man.  Consider the MDP model associated with the Pac-Man video game. Ghosts kill the p
c8108634-890b-43f4-a974-5016d8fba722
trentmkelly/LessWrong-43k
LessWrong
European Community Weekend 2018 Announcement We are excited to announce this year's European LessWrong Community Weekend. For the fifth time, rationalists from all over Europe (and some from outside Europe) are gathering in Berlin to socialize, have fun, exchange knowledge and skills, and have interesting discussions. The event takes place September 7th to September 9th and, like last year, it will be held in the beautiful Jugendherberge Wannsee which contains a large room for central events, several seminar rooms, and lots of comfortable spaces inside and out to socialize or relax. This is a community-driven event. That means that while there will be a keynote and pre-planned content, the bulk of the schedule will be filled by the participants. There will be space to give talks, short or long, provide workshops, or just gather some people to do an activity together. In previous years we had the talks, lightning talks and workshops you would expect, as well as lighter activities such as morning-workouts, meditation sessions, authentic relating games, swimming in the lake and many more. Of course, there will also be time to reconnect with friends and form new connections with other aspiring rationalists. Some valuable information Most of the talks and discussions will be held in English, so you do not need to be able to speak German to attend. The ticket price of €150 includes accommodation for two nights, on-site meals (breakfast, lunch, dinner) and snacks, and a welcome lunch on Friday at 12:00. The event wraps up Sunday afternoon around 15:00. In the days after the weekend, participants are invited to stay in Berlin a little longer to explore the city, go bouldering, play frisbee, etc. While this is not part of the official event, we will coordinate couch-surfing opportunities to avoid the need for hotels. tl;dr * When? 7-9 September 2018 * Where? http://jh-wannsee.de * How much? €150 * Apply here: http://tiny.cc/lwcw2018_signup and * Submit a contribution to support your application: http://tin
6a99c5ae-acb2-4967-ba8a-d5b3a811c87d
trentmkelly/LessWrong-43k
LessWrong
Rational Humanist Music Edit: Since posting this, I've gone on to found a rationalist singalong holiday and get an album produced, available at humanistculture.bandcamp.com Something that's bothered me a lot lately is a lack of good music that evokes the kind of emotion that spiritually-inspired music does, but whose subject matter is something I actually believe in. Most songs that attempt to do this suffer from "too literal syndrome," wordily talking about science and rationality as if they're forming an argument, rather that simply creating poetic imagery. I was recently motivated by the Baba Yetu music video for Civilization V, which essentially showcases the power of scientific achievement over the course of human history.... but the lyrics basically attribute this to Christianity, rather than scientific progress. I'm not opposed to religious music being used for such a purpose, but I wanted to find a song that hit all the right emotional notes as well as the intellectual concepts. I think that art is an important medium by which to communicate ideas, and for rationality to be successful as a meme it's going to need "carrier wave" works of art to help it compete with religion for the general population's passion and understanding. I've only found two songs that come close to being the specific thing I'm looking for: Word of God Singularity (Highly recommend good headphones/speakers for the Singularity one - there's some subtle ambient stuff that really sells the final parts that's less effective with mediocre sound)   Over the past few months I've been working on a rational humanist song. I consider myself a reasonably competent amateur songwriter when it comes to lyrics, not so much when it comes to instrumental composition. I was waiting to post something when I had an actual final version worth listening to, but it's been a month and I'm not sure how to get good instrumentation to go along with it and I'm just in the mood to share the lyrics. I'd appreciate both comments on
14cfcbb0-f9ee-4d87-b171-e2242b240326
trentmkelly/LessWrong-43k
LessWrong
In Defense of the Fundamental Attribution Error The Fundamental Attribution Error Also known, more accurately, as "Correspondence Bias." http://lesswrong.com/lw/hz/correspondence_bias/ The "more accurately" part is pretty important; bias -may- result in error, but need not -necessarily- do so, and in some cases may result in reduced error. A Simple Example Suppose I write a stupid article that makes no sense and rambles on without any coherent point.  There might be a situational cause of this; maybe I'm tired.  Correcting for correspondence bias means that more weight should be given to the situational explanation than the dispositional explanation, that I'm the sort of person who writes stupid articles that ramble on.  The question becomes, however, whether or not this increases the accuracy of your assessment of me; does correcting for this bias make you, in fact, less wrong? In this specific case, no, it doesn't.  A person who belongs to the class of people who write stupid articles is more likely to write stupid articles than a person who doesn't belong to that class - I'd be surprised if I ever saw Gwern write anything that wasn't well-considered, well-structured, and well-cited.  If somebody like Gwern or Eliezer wrote a really stupid article, we have sufficient evidence that he's not a member of that class of people to make that conclusion a poor one; the situational explanation is better, he's having some kind of off day.  However, given an arbitrary stupid article written by somebody for which we have no prior information, the distribution is substantially different.  We have different priors for "Randomly chosen person X writes article" and "Article is bad" implies "X is a bad writer of articles" than we do for "Well-known article author Y writes article" and "Article is bad" implies "Y is a bad writer of articles". Getting to the Point The FAE is putting emphasis on internal factors rather than external.  It's jumping first to the conclusion that somebody who just swerved is a bad driver, rathe
8e680fca-9062-44e7-91e4-978dddc821a8
trentmkelly/LessWrong-43k
LessWrong
An extension of Aumann's approach for reducing game theory to bayesian decision theory to include EDT and UDT like agents Aumann in Correlated Equilibrium as an Expression of Bayesian Rationality developed a formalism to reduce nashian game theory to bayesian decision making in a multi-agent setting and proved within that formalism that, under the conditions of common knowledge of "rationality" (read: common knowledge that every agent is running CDT) and that every agent knows his own action, the result will be a correlated equilibrium. As it turns out it is relatively straightforward to extend this framework to EDT and UDT like agents, which will be the goal of the rest of this post. We will be using the same notation as Aumann: Ω will stand for the space of all possible worlds (which we'll assume finite), p is a probability distribution on Ω, Si is the set of all possible actions for player i, si is a function from Ω to Si returning which action player i chooses in a given world, Pi is a partition of Ω representing the possible information states for player i and finally hi is a function from ∏iSi to R representing the utility function of player i. First we can ask what happens if we relax the condition that the players know their own action, allowing for players acting in non-deterministic ways. Without any other conditions this would allow the actions of players to be correlated in arbitrary ways without any source of information on their respective strategies. For example, suppose we have two agents, 1 and 2, playing rock-paper-scissors with both information partitions being trivial. We could have Ω={1,2,3} with p(1)=p(2)=p(3)=1/3 and s1(1)=Rock, s2(1)=Paper, s1(2)=Paper, s2(2)=Scissor, s1(3)=Scissor, s2(3)=Rock. Then both agents satisfy the CDT conditions, but player 2 seems to always be able to predict the action of player 1 and play the counter to their action, despite their information partition being trivial. To avoid that situation we can require the following condition, which we shall call the strategic independence condition: SI Given Pi∈Pi for every player i then the pr
f955ee7c-0062-4560-9372-5982f5dd23ef
StampyAI/alignment-research-dataset/blogs
Blogs
Are we "trending toward" transformative AI? (How would we know?) *Audio also available by searching Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio"* [Today’s world Transformative AI Digital people World of Misaligned AI World run by Something else or or Stable, galaxy-wide civilization](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/) > This is one of 4 posts summarizing hundreds of pages of technical reports focused almost entirely on forecasting one number: the year by which transformative AI will be developed.[1](#fn1) > > > > > By "transformative AI," I mean "AI powerful enough to bring us into a new, qualitatively different future." I specifically focus on what I'm calling [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/): AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement. > > > > > The sooner PASTA might be developed, the sooner the world could change [radically](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#impacts-of-pasta), and the more important it seems to be thinking today about how to make that change go well vs. poorly. > > In this post and the next, I will talk about the forecasting methods underlying my current view: I believe there's **more than a 10% chance we'll see something [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)-like enough to qualify as "transformative AI" within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100).** Below, I will: * Discuss [what kind of forecast I'm going for](#what-kind-of-forecast-am-i-going-for). + I'm not sure whether it will feel as though transformative AI is "on the way" long before it arrives. I'm hoping, instead, that we can use trends in key underlying facts about the world (such as AI capabilities, model size, etc.) to forecast a qualitatively unfamiliar future. + An analogy for this sort of forecasting would be something like: "This water isn't bubbling, and there are no signs of bubbling, but the temperature has gone from 70° Fahrenheit[2](#fn2) to 150°, and if it hits 212°, the water will bubble." Or: "It's like forecasting school closures and overbooked hospitals, when there aren't any yet, based on trends in reported infections."* Discuss whether we can look for [trends in how "impressive" or "capable" AI systems are](#subjective-extrapolations-and-). I think this approach is unreliable: (a) AI progress may not "trend" in the way we expect; (b) in my experience, different AI researchers have radically different intuitions about which systems are impressive or capable, and how progress is going. * Briefly discuss [Grace et al 2017](https://arxiv.org/pdf/1705.08807.pdf), the best existing survey of AI researchers on transformative AI timelines. Its conclusions broadly seem in line with my own forecasts, though there are signs the researchers weren't thinking very hard about the questions. The next piece in this series will focus on [Ajeya Cotra's "Forecasting Transformative AI with Biological Anchors](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)" (which I'll abbreviate below as "Bio Anchors"), the forecast I find most informative for transformative AI. What kind of forecast am I going for? 
------------------------------------- There are a couple of ways in which forecasting transformative AI is different from the kind of forecasting we might be used to. First, I'm forecasting over very long time horizons (decades), unlike e.g. a weather forecast (days) or an election forecast (months). This makes the task quite a bit harder,[3](#fn3) and harder for outsiders to evaluate since I don't have a clearly relevant [track record](https://www.cold-takes.com/prediction-track-records-i-know-of/) of making forecasts on similar topics. Second, I lack rich, clearly relevant data sources, and I can't look back through a bunch of similar forecasts from the past. FiveThirtyEight's [election](https://projects.fivethirtyeight.com/2020-election-forecast/) forecasts look at hundreds of polls, and they have a model of how well polls have predicted elections in the past. Forecasting transformative AI needs to rely more on intuition, guesswork and judgment, in terms of determining what data is most relevant and how it's relevant. Finally, I'm trying to forecast a **qualitatively unfamiliar future**. Transformative AI - and the strange future it comes with - doesn't *feel* like something we're "trending toward" year to year. * If I were trying to forecast when the world population would hit 10 billion, I could simply extrapolate [existing trends](https://ourworldindata.org/world-population-growth#future-population-growth) of world population. World population itself is known to be growing and can be directly estimated. In my view, extrapolating out a long-running trend is one of the better ways to make a forecast. * When FiveThirtyEight makes election forecasts, there's a background understanding that there's going to be an election on a certain date, and whoever wins will take office on another date. We all buy into that basic framework, and there's a general understanding that better polling means a better chance of winning. * By contrast, transformative AI - and the strange future it comes with - isn't something we're "headed for" in any clearly measurable way. There's no clear metric like "transformativeness of AI" or "weirdness of the world" that's going up regularly every year such that we can project it out into the future and get the date that something like [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) will be developed. Perhaps for some, these points gives enough reason to ignore the whole possibility of transformative AI, or assume it's very far away. But I don't think this is a good idea, for a couple of reasons. First, I have a background view that something like [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) is in a sense "inevitable," assuming continued advances in society and computing. The basic intuition here - which I could expand on if there's [interest](https://www.guidedtrack.com/programs/4kal2ue/run?posttitle=Are%20we%20%22trending%20toward%22%20transformative%20AI%3F%20(How%20would%20we%20know%3F)) - is that human brains are numerous and don't seem to need particular rare materials to produce, so it should be possible at some point to synthetically replicate the key parts of their functionality.[4](#fn4) At the same time, I'm not confident that PASTA will feel qualitatively as though it's "on the way" well before it arrives. (More on this [below](#subjective-extrapolations-and-).) 
So I'm inclined to look for ways to estimate when we can expect this development, despite the challenges, and despite the fact that it doesn't feel today as though it's around the corner. I think there are plenty of example cases where a **qualitatively unfamiliar future could be seen in advance by plotting the trend in some underlying, related facts about the world.** A few that come to mind: * When COVID-19 first emerged, a lot of people had trouble taking it seriously because it didn't feel as though we were "trending toward" or "headed for" a world full of overflowing hospitals, office and school closures, etc. At the time (say, January 2020), there were a relatively small number of cases, an even smaller number of deaths, and no qualitative sense of a global emergency. The only thing alarming about COVID-19, at first, was that case counts were growing at a fast exponential rate (though the overall number of cases was still small). But it was possible to extrapolate from the fast growth in case counts to a risk of a global emergency, and [some people did](https://80000hours.org/podcast/episodes/howie-rob-coronavirus-february-3rd/). (And [some didn't](https://i.insider.com/5e59596efee23d0fb873eb46?width=750&format=jpeg&auto=webp).) * Climatologists forecast a global rise in temperatures that's significantly more than what we've seen over the past few decades, and could have major consequences far beyond what we're seeing today. They do this by forecasting trends in greenhouse gas emissions and extrapolating *from there* to temperature and consequences. If you simply tried to ask "How fast is the temperature rising?" or "Are hurricanes getting worse?", and based all your forecasts of the future on those, you probably wouldn't be forecasting the same kinds of extreme events around 2100.[5](#fn5)* To give a more long-run example, we can project a date by which the sun will burn out, and conclude that the world will look very different by that date than it does now, even though there's no trend of things getting colder or darker today. ![](https://www.cold-takes.com/content/images/2021/08/covid-vs-workplace-closures-3.png)COVID-19 cases from [WHO](https://portal.who.int/report/eios-covid19-counts/#display=Global&nrow=1&ncol=1&arr=row&pg=1&labels=view_who_regions,view_continents&sort=global_code;asc&filter=&sidebar=-1&fv=). Workplace closures are from [this OWiD data](https://ourworldindata.org/grapher/workplace-closures-covid), simply scored as 1 for "recommended," 2 for "required for some," 3 for "required for all but key workers" and summed across all countries. An analogy for this sort of forecasting would be something like: "This water isn't bubbling, and there are no signs of bubbling, but the temperature has gone from 70° Fahrenheit[6](#fn6) to 150°, and if it hits 212°, the water will bubble." Ideally, I can find some underlying factors that are changing regularly enough for us to predict them (such as growth in the [size and cost of AI models](https://openai.com/blog/ai-and-compute/)), and then argue that if those factors reach a certain point, the odds of transformative AI will be high. You can think of this approach as answering the question: "If I think something like PASTA is inevitable, and I'm trying to guess the timing of it using a few different analysis methods, what do I guess?" 
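As a concrete (and entirely made-up) illustration of this style of forecasting, the sketch below fits an exponential trend to an underlying metric that is still far from any qualitative change, then solves for when the trend would cross a threshold. The numbers are invented; the point is that the forecast comes from projecting the underlying factor, not from waiting for the qualitative change itself to show up in the data.

```python
import numpy as np

# Made-up illustrative data: an underlying metric (e.g., reported cases, or
# training compute) observed over the first 6 time steps of some process.
t = np.arange(6)
y = np.array([3, 6, 13, 24, 52, 99], dtype=float)

# Fit log(y) = a*t + b, i.e., assume roughly exponential growth.
a, b = np.polyfit(t, np.log(y), 1)

threshold = 1e6  # the level at which the qualitative change is expected
t_cross = (np.log(threshold) - b) / a

print(f"growth rate ~{np.exp(a):.2f}x per step; "
      f"threshold crossed around t = {t_cross:.1f}")
```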
We can separately ask "And is there reason that this guess is implausible, untrustworthy, or too 'wild?'" - this was addressed in the [previous piece in this series](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/). Subjective extrapolations and "AI impressiveness" ------------------------------------------------- *For a different presentation of some similar content, see [this section](https://docs.google.com/document/d/1cCJjzZaJ7ATbq8N2fvhmsDOUWdm7t3uSSXv6bD0E_GM/edit#heading=h.njuz93bimqty) of [Bio Anchors](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP).* If we're looking for some underlying factors in the world that predict when transformative AI is coming, perhaps the first thing we should look for is trends in how "impressive" or "capable" AI systems are. The easiest version of this would be if the world happened to shake out such that: * One day, for the first time, an AI system managed to get a passing grade on a 4th-grade science exam. * Then we saw the first AI passing (and then acing) a 5th grade exam, then 6th grade exam, etc. * Then we saw the first AI earning a PhD, then the first AI writing a published paper, etc. all the way up to the first AI that could do Nobel-Prize-worthy science work. * This all was spread out regularly over the decades, so we could clearly see the state of the art advancing from 4th grade to 5th grade to 6th grade, all the way up to "postdoc" and beyond. And all of this happened slowly and regularly enough that we could start putting a date on "full-blown scientist AI" several decades in advance. It would be very convenient - I almost want to say "polite" - of AI systems to advance in this manner. It would also be "polite" if AI advanced in the way that some people seem to casually imagine it will: first taking over jobs like "truck driver" and "assembly line worker," then jobs like "teacher" and "IT support," and then jobs like "doctor" and "lawyer," before progressing to "scientist." Either of these would give us plenty of lead time and a solid basis to project when science-automating AI is coming. Unfortunately, I don't think we can count on such a thing. * AI seems to progress very differently from humans. For example, there were superhuman AI chess players[7](#fn7) long before there was AI that could reliably tell apart pictures of dogs and cats.[8](#fn8)* One possibility is that AI systems will be capable of the hardest intellectual tasks insects can do, then of the hardest tasks mice and other small mammals can do, then monkeys, then humans - effectively matching the abilities of larger and larger brains. If this happened, we wouldn't necessarily see many signs of AI being able to e.g. do science until we were *very* close. Matching a 4th-grader might not happen until the very end. * Another possibility is that AI systems will be able to do anything that a human can do within 1 second, then anything that a human can do within 10 seconds, etc. This could also be quite a confusing progression that makes it non-obvious how to forecast progress. Actually, if we didn't already know how humans tend to mature, we might find a child's progress to be pretty confusing and hard to extrapolate. 
**Watching someone progress from birth to age 8 wouldn't necessarily give you any idea that they were, say, 1/3 of the way to being able to start a business, make an important original scientific discovery, etc.** (Even *knowing* the usual course of human development, it's hard to tell from observing an 8-year-old what professional-level capabilities they could/will end up with in adulthood.)

Overall, it's quite unclear how we should think about the spectrum from "not impressive/capable" to "very impressive/capable" for AI. And indeed, in my experience, different AI researchers have radically different intuitions about which systems are impressive or capable, and how progress is going. I've often had the experience of seeing one AI researcher friend point to some new result and say "This is huge, how can anyone not see how close we're getting to powerful AI?" while another says "This is a minor advance with little significance."[9](#fn9)

It would be great if we could forecast the year transformative AI will be developed, by using a chart like this (from [Bio Anchors](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP); "TAI" means "transformative AI"):

![Hypothetical chart showing a projection of 'AI impressiveness' that crosses a line for transformative AI.](https://www.cold-takes.com/content/images/size/w1000/2021/08/impressiveness-extrap-chart.png)

But as far as I can tell, there's no way to define the y-axis that wouldn't be fiercely debated between experts.

Surveying experts
-----------------

One way to deal with this uncertainty and confusion would be to survey a large number of experts and simply ask them when they expect transformative AI to be developed. We might hope that each of the experts (or at least, many of them) is doing their own version of the "impressiveness extrapolation" above - or if not, that they're doing something else that can help them get a reasonable estimate. By averaging many estimates, we might get an aggregate that reflects the "wisdom of crowds."[10](#fn10)

I think the best version of this exercise is [Grace et al 2017](https://arxiv.org/pdf/1705.08807.pdf), a survey of 352 AI researchers that included a question about "when unaided machines can accomplish every task better and more cheaply than human workers" (which would presumably include tasks that advance scientific and technological development, and hence would qualify as [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)).

The two big takeaways from this survey, according to [Bio Anchors](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP) and me, are:

* **A ~20% probability of this sort of AI by 2036; a ~50% probability by 2060; a ~70% probability by 2100. These match the figures I give in the introduction.**
* Much later estimates for slightly differently phrased questions (posed to a smaller subset of respondents), implying (to me) that the researchers simply weren't thinking very hard about the questions.[11](#fn11)

My bottom line: this evidence is consistent with my current probabilities, though potentially not very informative. The next piece in this series will be entirely focused on [Ajeya Cotra's "Forecasting Transformative AI with Biological Anchors,"](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) the forecasting method I find most informative here.
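As a side note on the "wisdom of crowds" idea above: the aggregation step itself is simple enough to sketch in a few lines of Python. The individual forecast years below are invented for illustration (they are not responses from Grace et al 2017, which elicited richer probability judgments rather than single years); the sketch just shows how a median and some implied by-year probabilities fall out of a pool of individual guesses.

```python
# Minimal sketch of "wisdom of crowds" aggregation: pool individual forecasts
# and take the median. The forecast years below are invented for illustration;
# they are NOT data from Grace et al 2017.
from statistics import median

# Hypothetical: each expert's best-guess year for "unaided machines can
# accomplish every task better and more cheaply than human workers."
expert_forecasts = [2030, 2036, 2045, 2055, 2060, 2062, 2080, 2105, 2130, 2250]

aggregate = median(expert_forecasts)
print(f"Aggregate (median) forecast year: {aggregate:.0f}")

# An implied probability that the milestone arrives by a given year:
# the fraction of experts whose forecast falls on or before that year.
def prob_by(year: int) -> float:
    return sum(f <= year for f in expert_forecasts) / len(expert_forecasts)

for year in (2036, 2060, 2100):
    print(f"Implied P(by {year}) = {prob_by(year):.0%}")
```

With these made-up inputs the median lands at 2061 and the by-year fractions at 20%/50%/70% - values chosen to echo the survey's headline numbers, so any resemblance to the real data is by construction, not derivation.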
**Next in series:** [Forecasting transformative AI: the "biological anchors" method in a nutshell](https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/)

---

Footnotes
---------

1. Of course, the answer could be "A kajillion years from now" or "Never." [↩](#fnref1)
2. Centigrade equivalents for this sentence: 21°, 66°, 100° [↩](#fnref2)
3. Some notes on longer-term forecasting [here](https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting#Tetlock_long-range_forecasting_and_questions_of_relevance). [↩](#fnref3)
4. See also [this piece](https://www.lesswrong.com/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity) for a bit of a more fleshed out argument along these lines, which I don't agree with fully as stated (I don't think it presents a strong case for transformative AI soon), but which I think gives a good sense of my intuitions about in-principle feasibility. Also see [On the Impossibility of Supersized Machines](https://arxiv.org/abs/1703.10987) for some implicit (joking) responses to many common arguments for why transformative AI might be impossible to create. [↩](#fnref4)
5. For example, see the temperature chart [here](https://www.climate.gov/news-features/understanding-climate/climate-change-global-temperature-projections#:~:text=Results%20from%20a%20wide%20range,gases%20that%20human%20activities%20produce.) - the lowest line seems like it would be a reasonable projection, if temperature were the only thing you were looking at. [↩](#fnref5)
6. Centigrade equivalents for this sentence: 21°, 66°, 100° [↩](#fnref6)
7. [1997](https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)#Deep_Blue_versus_Kasparov). [↩](#fnref7)
8. The Kaggle "dogs vs. cats" challenge was [created in 2013](https://www.kaggle.com/c/dogs-vs-cats/leaderboard). [↩](#fnref8)
9. From [Bio Anchors](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP): "We have heard ML experts with relatively short timelines argue that AI systems today can essentially see as well as humans, understand written information, and beat humans at almost all strategy games, and the set of things they can do is expanding rapidly, leading them to expect that transformative AI would be attainable in the next decade or two by training larger models on a broader distribution of ML problems that are more targeted at generating economic value. Conversely, we have heard ML experts with relatively long timelines argue that ML systems require much more data to learn than humans do, are unable to transfer what they learn in one context to a slightly different context, and don't seem capable of much structured logical and causal reasoning; this leads them to believe we would need to make multiple major breakthroughs to develop TAI. At least one Open Philanthropy technical advisor has advanced each of these perspectives." [↩](#fnref9)
10. [Wikipedia](https://en.wikipedia.org/wiki/Wisdom_of_the_crowd): "The classic wisdom-of-the-crowds finding ... At a 1906 country fair in Plymouth, 800 people participated in a contest to estimate the weight of a slaughtered and dressed ox. Statistician Francis Galton observed that the median guess, 1207 pounds, was accurate within 1% of the true weight of 1198 pounds." [↩](#fnref10)
11. [Bio Anchors](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP):
    * *Some researchers were asked to forecast "HLMI" as defined above [high-level machine intelligence, which I would take to include something like PASTA], while a randomly-selected subset was instead asked to forecast "full automation of labor", the time when "all occupations are fully automatable." Despite the fact that achieving HLMI seems like it should quickly lead to full automation of labor, the median estimate for full automation of labor was ~2138 while the median estimate for HLMI was ~2061, almost 80 years earlier.*
    * *Random subsets of respondents were asked to forecast when individual milestones (e.g. laundry folding, human-level StarCraft, or human-level math research) would be achieved. The median year by which respondents expected machines to be able to automate AI research was ~2104, while the median estimate for HLMI was ~2061 -- another clear inconsistency because "AI research" is a task done by human workers.* [↩](#fnref11)
d27dfae2-7db9-4045-803f-3790e77dd3fa
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Is the orthogonality thesis at odds with moral realism?

Continuing [my](/r/discussion/lw/iwy/why_didnt_people_apparently_understand_the/) [quest](/lw/iza/no_universally_compelling_arguments_in_math_or/) to untangle people's confusions about Eliezer's metaethics... I've started to wonder if maybe some people have the intuition that the [orthogonality thesis](http://wiki.lesswrong.com/wiki/Orthogonality_thesis) is at odds with moral realism.

I personally have a very hard time seeing why anyone would think that, perhaps in part because of my experience in philosophy of religion. Theistic apologists would love to be able to say, "moral realism, therefore a sufficiently intelligent being would also be good." It would help patch some obvious holes in their arguments and help them respond to things like Stephen Law's [Evil God Challenge](http://stephenlaw.blogspot.com/2010/02/evil-god-challenge.html). But they mostly don't even *try* to argue that, for whatever reason.

You did see philosophers claiming things like that back in the bad old days before Kant, which raises the question of what's changed. I suspect the reason is fairly mundane, though: before Kant (roughly), it was not only dangerous to be an atheist, it was dangerous to question that the existence of God could be proven through reason (because it would get you suspected of being an atheist). It was even dangerous to advocate philosophical views that might possibly undermine the standard arguments for the existence of God. That guaranteed that philosophers could use whatever half-baked premises they wanted in constructing arguments for the existence of God, and have little fear of being contradicted.

Besides, even if you think an all-knowing being would also necessarily be perfectly good, it still seems perfectly possible to have an otherwise all-knowing being with a horrible blind spot regarding morality.

On the other hand, in the comments of [a post on the orthogonality thesis](/lw/cej/general_purpose_intelligence_arguing_the/), Stuart Armstrong mentions that:

> I've read the various papers [by people who reject the orthogonality thesis], and they all orbit around an implicit and often unstated moral realism. I've also debated philosophers on this, and the same issue rears its head - I can counter their arguments, but their opinions don't shift. There is an implicit moral realism that does not make any sense to me, and the more I analyse it, the less sense it makes, and the less convincing it becomes. Every time a philosopher has encouraged me to read a particular work, it's made me find their moral realism less likely, because the arguments are always weak.

This is not super-enlightening, partly because Stuart is talking about people whose views he admits he doesn't understand... but on the other hand, maybe Stuart agrees that there is some kind of conflict there, since he seems to imply that he himself rejects moral realism.

I realize I'm struggling a bit to guess what people could be thinking here, but I suspect some people are thinking it, so... anyone?
08b9bcb7-7f28-4162-b7e8-57252012510b
trentmkelly/LessWrong-43k
LessWrong
Charter Cities: why they're exciting & how they might work Hello!  What follows is a work-in-progress script about the idea of Charter Cities, which the EA-adjacent youtube channel RationalAnimations plans to animate soon. I want to make sure I'm presenting the idea of charter cities properly and in a compelling, understandable way, so I thought it would be helpful to post it here and get feedback from this forum! Introduction: charter cities, as economic growth, as neartermism When you think about ways to help people in developing countries, you probably think of international aid -- providing medical supplies, or coordinating disaster relief, or even just giving cash, like we discussed in our recent video about global poverty.  These things are great, and this kind of charitable aid saves lives every day.  But there’s something a little curious here, because these interventions AREN’T what helped the world’s most prosperous countries succeed in the first place.  Up until the 1700s, essentially the entire population of the planet lived in poverty. [1]  That started to change during the industrial revolution, when humanity developed more efficient, mechanized ways of producing goods and capturing energy.  Hundreds of millions of people were able to live better lives thanks to economic growth -- learning to use technology to grow food, make clothes, and get from place to place more efficiently. The overwhelming importance of economic growth remains true in modern times -- by far the greatest ongoing reductions in poverty and suffering are coming not from international aid projects, but from development, as low-income countries find new ways to do things more efficiently  and climb the ladder of technological advancement. the mouth of the Singapore River, in 1976 versus today For example, in the year 1960, Singapore was a poor and undeveloped country, producing only $428 per citizen.  Today, singapore’s economy has grown by many times, to around $73,000 per person.[2]  That’s an incredible amount of progress within a sin
d4e3cfb1-c980-40bd-8169-029f8bd5604b
trentmkelly/LessWrong-43k
LessWrong
How not to be a Naïve Computationalist

Meta-Proposal of which this entry is a subset: The Shortcut Reading Series is a series of Less Wrong posts that say what the minimal readings are, as opposed to the normal curriculum, that one ought to read to grasp most of the state-of-the-art conceptions humans have about a particular topic. Time is finite, there is only so much one person can read, and thus we need to find the geodesic path to epistemic enlightenment and show it to Less Wrong readers.

Exemplar: "How not to be a Naïve Computationalist", the Shortcut Reading Series post in philosophy of mind and language:

This post's raison d'être is to be a guide for the minimal amount of philosophy of language and mind necessary for someone who ends up thinking the world and the mind are computable (such as Tegmark, Yudkowsky, Hofstadter, Dennett and many of yourselves). The desired feature which they have achieved, and you soon will, is to be able to state reasons, debug opponents and understand different paradigms, as opposed to just thinking that it's 0 and 1's all the way down and not being able to say why.

This post is not about Continental/Historical Philosophy; for that there have been recommendations in http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/

The order is designed. What is sine qua non, absolutely necessary, is in bold, and OR means you only have to read one, the second one being more awesome and complex.

Language and Mind:

* 37 Ways words can be Wrong - Yudkowsky
* Darwin's Dangerous Idea, Chapters 3, 5, 11, 12 and 14 - Daniel Dennett
* On Denoting - Bertrand Russell
* On What There Is - Quine
* Two Dogmas of Empiricism - Quine
* Naming and Necessity - Kripke OR Two Dimensional Semantics - David Chalmers
* "Is Personal Identity What Matters?" - Derek Parfit
* Breakdown of Will - Part Two (don't read Part Three) - George Ainslie
* Concepts of Consciousness 2003 - Ned Block
* Attitudes de dicto and de se - David Lewis - Phil Papers 1
* General Semantics - David Lewis
9e7360ed-57b7-4509-9910-ece4d0eab005
StampyAI/alignment-research-dataset/blogs
Blogs
New funding for AI Impacts *By Katja Grace, 4 July 2015* AI Impacts has received two grants! We are grateful to the [Future of Humanity Institute](http://fhi.ox.ac.uk) (FHI) for $8,700 to support work on the project until September 2015, and the [Future of Life Institute](http://futureoflife.org) (FLI) for $49,310 for another year of work after that. Together this is enough to have a part time researcher until September 2016, plus a little extra for things like workshops and running the website. We are big fans of FHI and FLI, and are excited to be working alongside them. The FLI grant was part of the [recent contest](http://futureoflife.org/misc/2015selection) which distributed around $7M funding from Elon Musk and the [Open Philanthropy Project](http://www.openphilanthropy.org/) to projects designed to keep AI robust and beneficial. The full list of projects to be funded is [here](http://futureoflife.org/misc/2015awardees). You can see part of our proposal [here](http://aiimpacts.org/wp-content/uploads/2015/07/AI-Impacts-narrative-for-FLI-grant.pdf). This funding means that AI Impacts is no longer in urgent [need of support](http://aiimpacts.org/supporting-ai-impacts/). Further [donations](http://aiimpacts.org/donate/) will likely go to additional research through contract work, guest research, short term collaborations, and outsourceable data collection. Many thanks to those whose support—in the form of both funding and other feedback—has brought AI Impacts this far.