Machine Intelligence Research Institute Inc

Berkeley, CA 94704
Tax ID 58-2565917

Want to make a donation using Daffy?

Lower your income taxes with a charitable deduction this year when you donate to this non-profit via Daffy.

Daffy covers all ACH transaction fees so 100% of your donation goes to your favorite charities.

About this organization

Revenue: $5,708,399

Expenses: $3,660,550

Mission

MIRI exists to ensure that the creation of smarter-than-human intelligence has a positive impact.

About

The Machine Intelligence Research Institute is a research nonprofit studying the mathematical underpinnings of intelligent behavior. Our mission is to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable when they are developed.

The field of AI has a reputation for overselling its progress. In the “AI winters” of the late 1970s and 1980s, researchers’ failures to make good on ambitious promises led to a collapse of funding and interest in AI. Although the field is now undergoing a renaissance, overconfidence is still a major fear; discussion of the possibility of human-equivalent general intelligence is still largely relegated to the science fiction shelf. At the same time, researchers largely agree that AI is likely to begin outperforming humans on most cognitive tasks this century. Given how disruptive domain-general AI could be, we think it is prudent to begin a conversation about this now, and to investigate whether there are limited areas in which we can predict and shape this technology’s societal impact.

Researchers at MIRI tend to be relatively agnostic about how the state of the art in AI will change over the coming decades, and about how many years off smarter-than-human AI systems are. However, we think some qualitative predictions are possible:

  • As perception, inference, and planning algorithms improve, AI systems will be trusted with increasingly complex and long-term decision-making. Small errors will then have larger consequences.
  • Realistic goals and environments for general reasoning systems will be too complex for programmers to specify directly. AI systems will instead need to learn correct goals and environmental models inductively.
  • Systems that end up with poor models of their environment can do significant harm. However, poor models limit how well a planning system can control its environment, which limits the expected harm.
  • There are fewer obvious constraints on the harm a system with poorly specified goals might do. In particular, an autonomous system that learns about human goals, but is not correctly designed to align its own goals to its best model of human goals, could cause catastrophic harm in the absence of adequate checks.
  • AI systems’ goals or world-models may be brittle, exhibiting exceptionally good behavior until some seemingly irrelevant environmental variable changes. This is again a larger concern for incorrect goals than for incorrect beliefs and inference, because incorrect goals don’t limit the capability of an otherwise highly intelligent system.

Stuart Russell, a MIRI research advisor and co-author of the leading textbook on artificial intelligence, argues in “The Long-Term Future of Artificial Intelligence” that we should integrate questions of robustness and safety into mainstream capabilities research:

“Our goal as a field is to make better decision-making systems. And that is the problem. […If] you’re going to build a superintelligent machine, you have to give it something that you want it to do. The danger is that you give it something that isn’t actually what you really want — because you’re not very good at expressing what you really want, or even knowing what you really want — until it’s too late and you see that you don’t like it.

“If you think about it just in terms of an optimization problem: the machine is solving an optimization problem for you, and you leave out some of the variables that you actually care about. Well, it’s in the nature of optimization problems that if the system gets to manipulate some variables that don’t form part of the objective function — so it’s free to play with those as much as it wants — often, in order to optimize the ones that it is supposed to optimize, it will set the other ones to extreme values.

“My proposal is that we should stop doing AI in its simple definition of just improving the decision-making capabilities of systems. […] With civil engineering, we don’t call it ‘building bridges that don’t fall down’ — we just call it ‘building bridges.’ Of course we don’t want them to fall down. And we should think the same way about AI: of course AI systems should be designed so that their actions are well-aligned with what human beings want. But it’s a difficult unsolved problem that hasn’t been part of the research agenda up to now.

“We want to change the field so that it feels like civil engineering or like nuclear fusion. [… We] created a hydrogen bomb explosion — unlimited amounts of energy, more than we could possibly use. But it wasn’t in a socially beneficial form. And now it’s just what fusion researchers do — containment is what fusion research is. That’s the problem that they work on.”

In line with Russell’s talk, MIRI’s work is aimed at helping jump-start a paradigm of AI research that is conscious of the field’s long-term impact. Our methodology is to break down the alignment problem into simpler and more precisely stated subproblems, develop basic mathematical theory for understanding these problems, and then make use of that understanding in engineering applications.
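Russell’s omitted-variable point lends itself to a toy demonstration. The Python sketch below is purely illustrative (it is not MIRI’s or Russell’s code, and proxy_objective and true_utility are hypothetical functions invented for this example): an optimizer that is free to vary a quantity the stated objective never penalizes will drive that quantity to an extreme, even when the designer’s actual preferences are ruined by doing so.

    # Toy version of optimizing a proxy objective with an omitted variable.
    # All names here are hypothetical, for illustration only.

    def proxy_objective(x, y):
        # What the system was told to maximize; y appears only as a free lever.
        return x * (1.0 + y)

    def true_utility(x, y):
        # What the designer actually wanted: high x, with y kept near zero.
        return x - y ** 2

    # Any optimizer exhibits the effect; exhaustive search keeps the sketch simple.
    xs = [i / 10.0 for i in range(0, 11)]        # x in [0, 1]
    ys = [i / 10.0 for i in range(-100, 101)]    # y in [-10, 10]

    _, x_star, y_star = max(
        (proxy_objective(x, y), x, y) for x in xs for y in ys
    )

    print(f"optimizer chooses x={x_star}, y={y_star}")  # y pushed to its bound
    print(f"proxy objective: {proxy_objective(x_star, y_star):.1f}")  # 11.0
    print(f"true utility:    {true_utility(x_star, y_star):.1f}")     # -99.0

The optimizer scores as highly as possible on the objective it was given while leaving the designer far worse off on the utility that was never written down, which is exactly the failure mode Russell describes.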

Interesting data from the organization's 2020 Form 990 filing

The purpose of the non-profit is outlined in the filing as “To ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of the organization is to: a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact; b) raise awareness of this important issue; c) advise researchers, leaders, and laypeople around the world; d) as necessary, implement a smarter-than-human intelligence with humane, stable goals.”

When discussing its operations, the filing describes them as: “To ensure that the creation of smarter-than-human intelligence has a positive impact.”

  • The non-profit's state of legal domicile is reported as GA.
  • The filing lists the non-profit's 2020 address as 2036 BANCROFT WAY, BERKELEY, CA, 94704.
  • The non-profit reported 26 employees on its form as of 2020.
  • The organization is not a private foundation.
  • Expenses are greater than $1,000,000.
  • Revenue is greater than $1,000,000.
  • Revenue less expenses is $2,047,849 ($5,708,399 − $3,660,550).
  • The organization has 3 independent voting members.
  • The organization was formed in 2000.
  • The organization pays $2,256,302 in salary, compensation, and benefits to its employees.
  • The organization pays $23,500 in fundraising expenses.

By donating on this page you are making an irrevocable contribution to Daffy Charitable Fund, a 501(c)(3) public charity, and a subsequent donation recommendation to the charity listed above, subject to our Member Agreement. Contributions are generally eligible for a charitable tax deduction, and a yearly consolidated receipt will be provided by Daffy. Processing fees may be applied and will reduce the value available to send to the end charity. The recipient organizations have not provided permission for this listing and have not reviewed the content.
Donations to organizations are distributed as soon as the donation is approved and the funds are available. In the rare event that Daffy is unable to fulfill the donation request to this charity, you will be notified and given the opportunity to choose another charity. This may occur if the charity is unresponsive or if the charity is no longer in good standing with regulatory authorities.