James Hansen, Pushker Kharecha, Makiko Sato
The COP28 Chairman and the United Nations Secretary-General say that the goal to keep global warming below 1.5°C is alive, albeit barely, implying that the looser goal of the 2015 Paris Agreement (to keep warming well below 2°C) is still viable. We find that even the 2°C goal is dead if policy is limited to emission reductions and plausible CO₂ removal. IPCC (the Intergovernmental Panel on Climate Change, which advises the UN) has understated global warming in the pipeline and understated fossil fuel emissions in the pipeline via lack of realism in the Integrated Assessment Models that IPCC uses for climate projections. Wishful thinking as a policy approach must be replaced by transparent climate analysis, knowledge of the forcings that drive climate change, and realistic assessment of policy options. The next several years provide a narrow window of time to define actions that could still achieve a bright future for today's young people. We owe young people the knowledge and the tools to continually assess the situation and devise and adjust the course of action.
Our approach to analysis of global climate change, as described in Global Warming in the Pipeline,¹ puts comparable emphasis on (1) Earth's paleoclimate history, (2) global climate models (GCMs), and (3) modern observations of climate processes and climate change. One purpose of the Pipeline paper was to distinguish between this approach and that of IPCC, which puts principal emphasis on GCMs. GCMs are an essential tool, but the models must be consistent with Earth's history, and projections of future climate must employ plausible scenarios for energy use and for the climate forcings that drive climate change.
Policy implications of climate science can be grasped from a basic understanding of the human-made forcings that are driving Earth's climate away from the relatively stable climate of the Holocene (approximately the past 10,000 years). Our task is to provide understandable quantification of the climate forcings and the changes that will be needed to maintain a hospitable climate. The concerned public, including policymakers, must learn to appreciate basic graphs that summarize real-world data, because these must provide the basis for policy discussion.
1. CLIMATE SCIENCE
There are two major climate forcings: human-made greenhouse gases (GHGs) and aerosols (fine airborne particles). GHGs reduce Earth's thermal (heat) radiation to space and are the main cause of global warming. Aerosols reflect sunlight to space, mainly via their effect as condensation nuclei for clouds; more nuclei lead to smaller cloud drops and brighter, longer-lived clouds. Aerosols thus cause a global cooling that partially offsets GHG warming.
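The relative magnitudes of these two forcings can be sketched with the widely used logarithmic fit for CO₂ forcing (Myhre et al., 1998); the aerosol value below is purely illustrative and not a number from this article:

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Approximate CO2 radiative forcing (W/m^2) relative to a
    pre-industrial baseline, using the common logarithmic fit
    F = 5.35 * ln(C / C0) (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

ghg = co2_forcing(420.0)   # CO2-only forcing for a present-day abundance
aerosol = -1.0             # illustrative aerosol forcing (negative = cooling)
net = ghg + aerosol        # aerosol cooling partially offsets GHG warming
print(f"CO2 forcing: {ghg:.2f} W/m^2, net after aerosols: {net:.2f} W/m^2")
```

The fit gives roughly 3.7 W/m² for each doubling of CO₂, which is the benchmark forcing used in climate-sensitivity discussions.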
Global warming in the pipeline
James E. Hansen¹*, Makiko Sato¹, Leon Simons², Larissa S. Nazarenko³,⁴, Isabelle Sangha¹, Pushker Kharecha¹, James C. Zachos⁵, Karina von Schuckmann⁶, Norman G. Loeb⁷, Matthew B. Osman⁸, Qinjian Jin⁹, George Tselioudis³, Eunbi Jeong¹⁰, Andrew Lacis³, Reto Ruedy³,¹¹, Gary Russell³, Junji Cao¹², Jing Li¹³
¹ Climate Science, Awareness and Solutions, Columbia University Earth Institute, New York, NY, USA
² The Club of Rome Netherlands, 's-Hertogenbosch, The Netherlands
³ NASA Goddard Institute for Space Studies, New York, NY, USA
⁴ Center for Climate Systems Research, Columbia University Earth Institute, New York, NY, USA
⁵ Earth and Planetary Science, University of California, Santa Cruz, CA, USA
⁶ Mercator Ocean International, Ramonville St-Agne, France
⁷ NASA Langley Research Center, Hampton, VA, USA
⁸ Department of Geosciences, University of Arizona, Tucson, AZ, USA
⁹ Department of Geography and Atmospheric Science, University of Kansas, Lawrence, KS, USA
¹⁰ CSAS KOREA, Goyang, Gyeonggi-do, South Korea
¹¹ Business Integra, Inc, New York, NY, USA
¹² Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing, China
¹³ Department of Atmospheric and Oceanic Sciences, School of Physics, Peking University, Beijing, China
* Correspondence address. Director of Climate Science, Awareness and Solutions, Earth Institute, Columbia University, 475 Riverside Drive, Ste. 401-O, New York, NY 10115, USA. E-mail: [email protected]
Improved knowledge of glacial-to-interglacial global temperature change yields Charney (fast-feedback) equilibrium climate sensitivity 1.2 ± 0.3°C (2σ) per W/m², which is 4.8°C ± 1.2°C for doubled CO₂. Consistent analysis of temperature over the full Cenozoic era, including slow feedbacks by ice sheets and trace gases, supports this sensitivity and implies that CO₂ was 300–350 ppm in the Pliocene and about 450 ppm at transition to a nearly ice-free planet, exposing unrealistic lethargy of ice sheet models. Equilibrium global warming for today's GHG amount is 10°C, which is reduced to 8°C by today's human-made aerosols. Equilibrium warming is not committed warming; rapid phaseout of GHG emissions would prevent most equilibrium warming from occurring. However, decline of aerosol emissions since 2010 should increase the 1970–2010 global warming rate of 0.18°C per decade to a post-2010 rate of at least 0.27°C per decade. Thus, under the present geopolitical approach to GHG emissions, global warming will exceed 1.5°C in the 2020s and 2°C before 2050. Impacts on people and nature will accelerate as global warming increases hydrologic (weather) extremes. The enormity of consequences demands a return to Holocene-level global temperature. Required actions include: (1) a global increasing price on GHG emissions accompanied by development of abundant, affordable, dispatchable clean energy, (2) East-West cooperation in a way that accommodates developing world needs, and (3) intervention with Earth's radiation imbalance to phase down today's massive human-made geo-transformation of Earth's climate. Current political crises present an opportunity for reset, especially if young people can grasp their situation.
Keywords: Aerosols; Climate Sensitivity; Paleoclimate; Global Warming; Energy Policy; Cenozoic
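As a quick arithmetic check on the abstract's headline numbers: the stated fast-feedback sensitivity of 1.2°C per W/m², combined with an assumed doubled-CO₂ forcing of about 4 W/m² (my assumption; the abstract does not state the forcing), reproduces the 4.8°C figure:

```python
# Arithmetic sketch of the abstract's headline sensitivity (illustrative).
S = 1.2        # fast-feedback sensitivity, deg C per W/m^2 (from the abstract)
F_2xCO2 = 4.0  # approximate forcing for doubled CO2, W/m^2 (assumed here)
print(f"Equilibrium warming per CO2 doubling: {S * F_2xCO2:.1f} deg C")
```

The same sensitivity applied to any forcing estimate converts W/m² directly into eventual equilibrium warming, which is how the abstract's 10°C figure for today's GHG amount is to be read.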
Background information and structure of paper
It has been known since the 1800s that infrared-absorbing (greenhouse) gases (GHGs) warm Earth's surface and that the abundance of GHGs changes naturally as well as from human actions [1, 2].¹ Roger Revelle wrote in 1965 that "we are conducting a vast geophysical experiment" by burning fossil fuels that accumulated in Earth's crust over hundreds of millions of years: "Carbon dioxide (CO₂) in the air is now increasing and already has reached levels that have not existed for millions of years, with consequences that have yet to be determined." Jule Charney led a study in 1979 by the United States National Academy of Sciences that concluded that doubling of atmospheric CO₂ was likely to cause global warming of 3 ± 1.5°C. Charney added: "However, we believe it is quite possible that the capacity of the intermediate waters of the ocean to absorb heat could delay the estimated warming by several decades." After U.S. President Jimmy Carter signed the 1980 Energy Security Act, which included a focus on unconventional fossil fuels such as coal gasification and rock fracturing (fracking) to extract shale oil and tight gas, the U.S. Congress asked the National Academy of Sciences again to assess potential climate effects. Their massive Changing Climate report had a measured tone on energy policy, amounting to a call for research. Was not enough known to caution lawmakers against taxpayer subsidy of the most carbon-intensive fossil fuels? Perhaps the equanimity was due in part to a major error: the report assumed that the delay of global warming caused by the ocean's thermal inertia is 15 years, independent of climate sensitivity. With that assumption, they concluded that climate sensitivity for 2 × CO₂ is near or below the low end of Charney's 1.5–4.5°C range. If climate sensitivity was low and the lag between emissions and climate response was only 15 years, climate change would not be nearly the threat that it is.
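The report's key assumption can be examined with a minimal one-box energy-balance model (a standard textbook construction, not taken from this paper): the surface response time is proportional to climate sensitivity rather than fixed, and deep-ocean mixing lengthens it further toward a century. All numbers below are illustrative:

```python
SECONDS_PER_YEAR = 3.15e7

def response_time_years(sensitivity, heat_capacity):
    """e-folding response time of a one-box climate model,
    C dT/dt = F - T/S, which gives tau = C * S: the lag grows in
    proportion to climate sensitivity S (K per W/m^2) instead of
    being fixed.  heat_capacity is in J m^-2 K^-1."""
    return sensitivity * heat_capacity / SECONDS_PER_YEAR

C_MIXED_LAYER = 4.2e8  # ~100 m ocean mixed layer, J m^-2 K^-1 (illustrative)
for s in (0.5, 1.2):   # low vs. high climate sensitivity, K per W/m^2
    print(f"S = {s}: tau = {response_time_years(s, C_MIXED_LAYER):.0f} years")
```

Even with the mixed layer alone, the lag roughly triples between low and high sensitivity; assuming a fixed 15-year lag therefore biases the inferred sensitivity low whenever the true sensitivity is high.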
Simultaneous with preparation of Changing Climate, climate sensitivity was addressed at the 1982 Ewing Symposium at the Lamont Doherty Geophysical Observatory of Columbia University on 25–27 October, with papers published in January 1984 as a monograph of the American Geophysical Union. Paleoclimate data and global climate modeling together led to an inference that climate sensitivity is in the range 2.5–5°C for 2 × CO₂ and that climate response time to a forcing is of the order of a century, not 15 years. Thus, the concept that a large amount of additional human-made warming is already "in the pipeline" was introduced. E.E. David, Jr., President of Exxon Research and Engineering, insightfully noted in his keynote talk at the symposium: "The critical problem is that the environmental impacts of the CO₂ buildup may be so long delayed. A look at the theory of feedback systems shows that where there is such a long delay, the system breaks down, unless there is anticipation built into the loop."
Thus, the danger caused by climate's delayed response and the need for anticipatory action to alter the course of fossil fuel development was apparent to scientists and the fossil fuel industry 40 years ago.² Yet industry chose to long deny the need to change energy course, and now, while governments and financial interests connive, most industry adopts a greenwash approach that threatens to lock in perilous consequences for humanity. Scientists will share responsibility if we allow governments to rely on goals for future global GHG levels, as if targets had meaning in the absence of policies required to achieve them.
The Intergovernmental Panel on Climate Change (IPCC) was established in 1988 to provide scientific assessments on the state of knowledge about climate change, and almost all nations agreed to the 1992 United Nations Framework Convention on Climate Change with the objective to avert "dangerous anthropogenic interference with the climate system." The current IPCC Working Group 1 report provides a best estimate of 3°C for equilibrium global climate sensitivity to 2 × CO₂ and describes shutdown of the overturning ocean circulations and large sea level rise on the century time scale as "high impact, low probability," even under extreme GHG growth scenarios. This contrasts with the "high impact, high probability" assessments reached in a paper (hereafter abbreviated Ice Melt) that several of us published in 2016. Recently, our paper's first author (JEH) described a long-time effort to understand the effect of ocean mixing and aerosols on observed and projected climate change, which led to a conclusion that most climate models are unrealistically insensitive to freshwater injected by melting ice and that ice sheet models are unrealistically lethargic in the face of rapid, large climate change.
NREL: The Four Phases of Storage Deployment: A Framework for the Expanding Role of Storage in the U.S. Power System
Storage Futures Study
This report is one in a series of NREL's Storage Futures Study (SFS) publications. The SFS is a multiyear research project that explores the role and impact of energy storage in the evolution and operation of the U.S. power sector. The SFS is designed to examine the potential impact of energy storage technology advancement on the deployment of utility-scale storage and the adoption of distributed storage, and the implications for future power system infrastructure investment and operations. The research findings and supporting data will be published as a series of publications. The table on the next page lists the planned publications and specific research topics they will examine under the SFS.
This report, the first in the SFS series, explores the roles and opportunities for new, cost-competitive stationary energy storage with a conceptual framework based on four phases of current and potential future storage deployment, and presents a value proposition for energy storage that could result in substantial new cost-effective deployments. This conceptual framework provides a broader context for consideration of the later reports in the series, including the detailed results of the modeling and analysis of power system evolution scenarios and their operational implications.
The SFS series provides data and analysis in support of the U.S. Department of Energy's Energy Storage Grand Challenge, a comprehensive program to accelerate the development, commercialization, and utilization of next-generation energy storage technologies and sustain American global leadership in energy storage. The Energy Storage Grand Challenge employs a use case framework to ensure storage technologies can cost-effectively meet specific needs, and it incorporates a broad range of technologies in several categories: electrochemical, electromechanical, thermal, flexible generation, flexible buildings, and power electronics.
More information, any supporting data associated with this report, links to other reports in the series, and other information about the broader study are available at https://www.nrel.gov/analysis/storage-futures.html.
The U.S. electricity system currently has about 24 GW of stationary energy storage, the majority of it in the form of pumped storage hydropower (PSH). Given changing technologies and market conditions, the deployment expected in the coming decades is likely to include a mix of technologies. Declining costs of energy storage are increasing the likelihood that storage will grow in importance in the U.S. power system. This work uses insights from recent deployment trends, projections, and analyses to develop a framework that characterizes the value proposition of storage, to help utilities, regulators, and developers be better prepared for the role storage might play and to understand the need for careful analysis to ensure cost-optimal storage deployment.
To explore the roles and opportunities for new cost-competitive stationary energy storage, we use a conceptual framework based on four phases of current and potential future storage deployment (see Table ES-1). The four phases, which progress from shorter to longer duration, link the key metric of storage duration to possible future deployment opportunities, considering how the cost and value vary as a function of duration.
The 23 GW of PSH in the United States was built mostly before 1990 to provide peaking capacity and energy time-shifting for large, less flexible capacity. The economics of PSH allowed for deployment with multiple hours of capacity that allowed it to provide multiple grid services. These plants continue to provide valuable grid services that span the four phases framework, and their use has evolved to respond to a changing grid. However, a variety of factors led to a multidecade pause in new development with little storage deployment occurring from about 1990 until 2011.¹
Changing market conditions, such as the introduction of wholesale electricity markets, along with new technologies suggest that storage deployment since 2011 may follow a somewhat different path, diverging from the earlier deployment of exclusively 8+ hour PSH. Instead, more recent deployment of storage has largely begun with shorter-duration storage, and we anticipate that new storage deployment will follow a trend of increasing durations.
We characterize this trend in our four phases framework, which captures how both the cost and value of storage change as a function of duration. Many storage technologies have a significant cost associated with increasing the duration, i.e., the actual energy stored per unit of power capacity. In contrast, the value of most grid services does not necessarily increase with increasing asset duration: a service may gain no value beyond a certain duration, or its value may increase at a rapidly diminishing rate. As a result, the economic performance of most storage technologies will rapidly decline beyond a certain duration. In current U.S. electricity markets, the value of many grid services can be captured by discrete and relatively short-duration storage (such as less than 1 hour for most operating reserves or 4 hours for capacity).
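The duration economics described here can be sketched numerically: cost grows roughly linearly with duration while service value saturates, so net value peaks and then declines. Every dollar figure and the saturation point below are invented for illustration:

```python
def net_value_per_kw(duration_h, power_cost=300.0, energy_cost=150.0,
                     capacity_value=1200.0, saturation_h=4.0):
    """Illustrative $/kW economics of a storage asset vs. duration.
    Cost = power-related term + a linear energy (duration) term.
    Value of a grid service (e.g., peaking capacity) rises with
    duration but saturates beyond saturation_h hours.
    All dollar figures are hypothetical."""
    cost = power_cost + energy_cost * duration_h
    value = capacity_value * min(duration_h, saturation_h) / saturation_h
    return value - cost

for h in (1, 2, 4, 6, 8):
    print(f"{h} h: net value {net_value_per_kw(h):+.0f} $/kW")
```

With these toy numbers the net value peaks at the saturation duration and turns negative beyond it, which is the mechanism behind the framework's alignment of duration with specific services.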
Together, the increasing cost of storage with duration and the lack of incremental value with increasing storage duration will likely contribute to growth of storage in the U.S. power sector that is characterized by a progression of deployments that aligns duration with specific services and storage technologies.
The four phases conceptual framework introduced in this work is a simplification of a more complicated evolution of the stationary energy storage industry and the power system as a whole. While we present four distinct phases, the boundaries between phases will be somewhat indistinct, transitions between phases will occur at different times in different regions as various markets for specific services are saturated, and phases can overlap within a region. These transitions and the total market sizes are strongly influenced by the regional deployment of variable renewable energy (VRE) as well as hybrid deployments. However, we believe it is a useful framework for considering the role of different storage technologies, and particularly the importance of duration in driving adoption in each phase.
Phase 1, which began around 2011, is characterized by the deployment of storage with 1-hour or shorter duration, and it resulted from the emergence of restructured markets and new technologies that allow for cost-competitive provision of operating reserves, including regulating reserves. Potential deployment of short-duration storage in Phase 1 is bounded by the overall requirements for operating reserves, which is less than 30 GW in the United States even when including regulating reserves, spinning contingency reserves, and frequency responsive reserves, some of which are not yet widely compensated services.
Phase 2 is characterized by the deployment of storage with 2–6 hours of discharge duration to serve as peaking capacity. Phase 2 has begun in some regions, with lithium-ion batteries becoming cost-competitive where durations of 2–6 hours are sufficient to provide reliable peaking capacity. As prices continue to fall, batteries are expected to become cost-competitive in more locations. These storage assets derive much of their value from the replacement of traditional peaking resources (primarily natural gas-fired combustion turbines), but they also take value from time-shifting/energy arbitrage of energy supply. The potential opportunities of Phase 2 are limited by the local or regional length of the peak demand period and have a lower bound of about 40 GW. However, the length of peak demand is highly affected by the deployment of VRE, specifically solar photovoltaics (PV), which narrows the peak demand period. Phase 2 is characterized in part by the positive feedback between PV increasing the value of storage (increasing its ability to provide capacity) and storage increasing the value of PV (increasing its energy value by shifting its output to periods of greater demand). Thus, greater deployment of solar PV could extend the storage potential of Phase 2 to more than 100 GW in the United States in scenarios where 25% of the nation's electricity is derived from solar.
Phase 3 is less distinct, but is characterized by lower costs and technology improvements that enable storage to be cost-competitive while serving longer-duration (4–12 hour) peaks. These longer net load peaks can result from the addition of substantial 2–6 hour storage deployed in Phase 2. Deployment in Phase 3 could include a variety of new technologies and could also see a reemergence of pumped storage, taking advantage of new technologies that reduce costs and siting constraints while exploiting the 8+ hour durations typical of many pumped storage facilities. The technology options for Phase 3 include next-generation compressed air and various thermal or mechanical-based storage technologies. Also, storage in this phase might provide additional sources of value, such as transmission deferral and additional time-shifting of solar and wind generation to address diurnal mismatches of supply and demand. Our scenario analysis identified 100 GW or more of potential opportunities for Phase 3 in the United States, in addition to the existing PSH that provides valuable capacity in several regions. Of note for both Phases 2 and 3 is a likely mix of configurations, with some stand-alone storage, but also a potentially significant fraction of storage deployments associated with hybrid plants, where storage can take advantage of tax credits, or shared capital and operating expenses. As in Phase 2, additional VRE, especially solar PV, could extend the storage potential of Phase 3, enabling contributions of VRE exceeding 50% on an annual basis.
Phase 4 is the most uncertain of our phases. It characterizes a possible future in which storage with durations from days to months is used to achieve very high levels of renewable energy (RE) in the power sector, or as part of multisector decarbonization. Technology options in this space include production of liquid and gas fuels, which can be stored in large underground formations that enable extremely long-duration storage with very low loss rates. This low loss rate allows for seasonal shifting of RE supply and generation of a carbon-free fuel for industrial processes and feedstocks. Phase 4 technologies are generally characterized by high power-related costs associated with fuel production and use but very low duration-related costs. Thus, traditional metrics such as cost per kilowatt-hour of storage capacity are less useful; this, combined with the potential use of fuels for non-electric-sector applications, makes comparison of Phase 4 technologies with other storage technologies more difficult. The potential opportunities for Phase 4 technologies measure in the hundreds of gigawatts in the United States, and these technologies could potentially address the residual demand that is very difficult or expensive to meet with RE resources and storage deployed in Phases 1–3.
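The cost-metric point above can be made concrete with a two-term capital-cost model, splitting cost into a power-related term and a duration-related term; all numbers are hypothetical:

```python
def total_cost(power_kw, duration_h, cost_per_kw, cost_per_kwh):
    """Total capital cost of a storage asset: a power-related term
    (conversion equipment, $/kW) plus a duration-related term
    (energy capacity, $/kWh). All numbers below are hypothetical."""
    return power_kw * cost_per_kw + power_kw * duration_h * cost_per_kwh

# Battery-like: cheap power equipment, costly energy capacity.
# Fuel-like (Phase 4): costly conversion equipment, near-free stored energy.
for hours in (2, 100):
    battery = total_cost(1.0, hours, 300, 200)
    fuel = total_cost(1.0, hours, 1500, 1)
    print(f"{hours} h: battery ${battery:.0f}/kW, fuel ${fuel:.0f}/kW")
```

The comparison flips with duration: the battery-like option wins at a few hours, while the fuel-like option wins at hundreds of hours, which is why a single $/kWh metric misleads for Phase 4 technologies.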
Our four phases framework is intended to describe a plausible evolution of cost-competitive storage technologies, but more importantly, it identifies key elements needed for stakeholders to evaluate alternative pathways for both storage and other sources of system flexibility. Specifically, an improved characterization of the various grid services needed, including capacity and duration, could help provide a deeper understanding of the tradeoffs between various technologies and non-storage resources such as responsive demand. Such a characterization would help ensure the mix of flexibility technologies deployed is robust to an evolving grid, which will ultimately determine the amount of storage and flexibility the power system will need.
Global Warming in the Pipeline will be published in Oxford Open Climate Change of Oxford University Press next week. The paper describes a perspective on global climate change that is an alternative to that of the Intergovernmental Panel on Climate Change (IPCC), which provides scientific advice on climate change to the United Nations.
Our paper may be read as being critical of IPCC. But we have no criticism of individual scientists, who include world-leading researchers volunteering their time to produce IPCC reports. Rather, we are questioning whether the IPCC procedure and product yield the advice that the public, especially young people, need to understand and protect their home planet.
Discussion of our paper will likely focus on differences between our conclusions and those of IPCC. I hope, however, that it may lead to consideration of some basic underlying matters.
Three-pronged analysis. IPCC climate analysis leans heavily on GCMs (global climate models), too heavily in my opinion. We prefer a comparable weight on (1) information from Earth's paleoclimate history, (2) GCMs, and (3) observations of ongoing climate processes and climate change. This 3-pronged approach can result in rather complex papers, but so, too, is the real world complex. We use this 3-pronged approach both in the heavily peer-reviewed paper Ice Melt, Sea Level Rise, and Superstorms, published in 2016, and in our present Global Warming in the Pipeline (these papers hereinafter abbreviated as Ice Melt and Pipeline, respectively). Below I note specific travails and consequences for the Ice Melt paper that resulted from the fact that our 3-pronged approach differed from that of IPCC. I hope that some explanation here may help avoid a similar fate for Pipeline, as the world is running short on time to develop a strategy to preserve a propitious climate for today's young people and their children.
§7111. Congressional findings
The Congress of the United States finds that
- the United States faces an increasing shortage of nonrenewable energy resources;
- this energy shortage and our increasing dependence on foreign energy supplies present a serious threat to the national security of the United States and to the health, safety and welfare of its citizens;
- a strong national energy program is needed to meet the present and future energy needs of the Nation consistent with overall national economic, environmental and social goals;
- responsibility for energy policy, regulation, and research, development and demonstration is fragmented in many departments and agencies and thus does not allow for the comprehensive, centralized focus necessary for effective coordination of energy supply and conservation programs; and
- formulation and implementation of a national energy program require the integration of major Federal energy functions into a single department in the executive branch.

(Pub. L. 95–91, title I, §101, Aug. 4, 1977, 91 Stat. 567.)
§7112. Congressional declaration of purpose
The Congress therefore declares that the establishment of a Department of Energy is in the public interest and will promote the general welfare by assuring coordinated and effective administration of Federal energy policy and programs. It is the purpose of this chapter:
(Pub. L. 95–91, title I, §102, Aug. 4, 1977, 91 Stat. 567; Pub. L. 101–510, div. C, title XXXI, §3163, Nov. 5, 1990, 104 Stat. 1841.)
- To establish a Department of Energy in the executive branch.
- To achieve, through the Department, effective management of energy functions of the Federal Government, including consultation with the heads of other Federal departments and agencies in order to encourage them to establish and observe policies consistent with a coordinated energy policy, and to promote maximum possible energy conservation measures in connection with the activities within their respective jurisdictions.
- To provide for a mechanism through which a coordinated national energy policy can be formulated and implemented to deal with the short-, mid- and long-term energy problems of the Nation; and to develop plans and programs for dealing with domestic energy production and import shortages.
- To create and implement a comprehensive energy conservation strategy that will receive the highest priority in the national energy program.
- To carry out the planning, coordination, support, and management of a balanced and comprehensive energy research and development program, including
- assessing the requirements for energy research and development;
- developing priorities necessary to meet those requirements;
- undertaking programs for the optimal development of the various forms of energy production and conservation; and
- disseminating information resulting from such programs, including disseminating information on the commercial feasibility and use of energy from fossil, nuclear, solar, geothermal, and other energy technologies.
- To place major emphasis on the development and commercial use of solar, geothermal, recycling and other technologies utilizing renewable energy resources.
- To continue and improve the effectiveness and objectivity of a central energy data collection and analysis program within the Department.
- To facilitate establishment of an effective strategy for distributing and allocating fuels in periods of short supply and to provide for the administration of a national energy supply reserve.
- To promote the interests of consumers through the provision of an adequate and reliable supply of energy at the lowest reasonable cost.
- To establish and implement through the Department, in coordination with the Secretaries of State, Treasury, and Defense, policies regarding international energy issues that have a direct impact on research, development, utilization, supply, and conservation of energy in the United States and to undertake activities involving the integration of domestic and foreign policy relating to energy, including provision of independent technical advice to the President on international negotiations involving energy resources, energy technologies, or nuclear weapons issues, except that the Secretary of State shall continue to exercise primary authority for the conduct of foreign policy relating to energy and nuclear nonproliferation, pursuant to policy guidelines established by the President.
- To provide for the cooperation of Federal, State, and local governments in the development and implementation of national energy policies and programs.
- To foster and assure competition among parties engaged in the supply of energy and fuels.
- To assure incorporation of national environmental protection goals in the formulation and implementation of energy programs, and to advance the goals of restoring, protecting, and enhancing environmental quality, and assuring public health and safety.
- To assure, to the maximum extent practicable, that the productive capacity of private enterprise shall be utilized in the development and achievement of the policies and purposes of this chapter.
- To provide for, encourage, and assist public participation in the development and enforcement of national energy programs.
- To create an awareness of, and responsibility for, the fuel and energy needs of rural and urban residents as such needs pertain to home heating and cooling, transportation, agricultural production, electrical generation, conservation, and research and development.
- To foster insofar as possible the continued good health of the Nation's small business firms, public utility districts, municipal utilities, and private cooperatives involved in energy production, transportation, research, development, demonstration, marketing, and merchandising.
- To provide for the administration of the functions of the Energy Research and Development Administration related to nuclear weapons and national security which are transferred to the Department by this chapter.
- To ensure that the Department can continue current support of mathematics, science, and engineering education programs by using the personnel, facilities, equipment, and resources of its laboratories and by working with State and local education agencies, institutions of higher education, and business and industry. The Department's involvement in mathematics, science, and engineering education should be consistent with its main mission and should be coordinated with all Federal efforts in mathematics, science, and engineering education, especially with the Department of Education and the National Science Foundation (which have the primary Federal responsibility for mathematics, science, and engineering education).
This is the goal. No major technological breakthroughs are required, just commitment and a lot of work. Four paths are explored. Pick your favorite, or a combination.
An NREL study shows there are multiple pathways to 100% clean electricity by 2035 that would produce significant benefits exceeding the additional power system costs.
For the study, funded by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy, NREL modeled technology deployment, costs, benefits, and challenges to decarbonize the U.S. power sector by 2035, evaluating a range of future scenarios for achieving a net-zero power grid by that year.
The exact technology mix and costs will be determined by research and development, among other factors, over the next decade. The results are published in Examining Supply-Side Options To Achieve 100% Clean Electricity by 2035.
To examine what it would take to achieve a net-zero U.S. power grid by 2035, NREL leveraged decades of research on high-renewable power systems, from the Renewable Electricity Futures Study, to the Storage Futures Study, to the Los Angeles 100% Renewable Energy Study, to the Electrification Futures Study, and more.
NREL used its publicly available flagship Regional Energy Deployment System capacity expansion model to study supply-side scenarios representing a range of possible pathways to a net-zero power grid by 2035, from the most to the least optimistic availability and costs of technologies.
The scenarios apply a carbon constraint to:
- Achieve 100% clean electricity by 2035 under accelerated demand electrification
- Reduce economywide, energy-related emissions by 62% in 2035 relative to 2005 levels, a steppingstone to economywide decarbonization by 2050.
Technology Deployment Must Rapidly Scale Up
In all modeled scenarios, new clean energy technologies are deployed at an unprecedented scale and rate to achieve 100% clean electricity by 2035. As modeled, wind and solar energy provide 60%–80% of generation in the least-cost electricity mix in 2035, and the overall generation capacity grows to roughly three times the 2020 level by 2035, including a combined 2 terawatts of wind and solar.
To achieve those levels would require rapid and sustained growth in installations of solar and wind generation capacity. If siting and land-use challenges limit deployment of this new generation capacity and its associated transmission, nuclear capacity helps make up the difference and more than doubles today's installed capacity by 2035.
Across the four scenarios, 5–8 gigawatts of new hydropower and 3–5 gigawatts of new geothermal capacity are also deployed by 2035. Diurnal storage (2–12 hours of capacity) also increases across all scenarios, with 120–350 gigawatts deployed by 2035 to ensure demand for electricity is met during all hours of the year.
Seasonal storage becomes important when clean electricity makes up about 80%–95% of generation and there is a multiday to seasonal mismatch of variable renewable supply and demand. Across the scenarios, seasonal capacity in 2035 ranges from about 100 to 680 gigawatts.
Significant additional research is needed to understand the manufacturing and supply chain associated with the unprecedented deployment envisioned in the scenarios.
Significant Additional Transmission Capacity
In all scenarios, significant transmission is also added in many locations, mostly to deliver energy from wind-rich regions to major load centers in the eastern United States. As modeled, the total transmission capacity in 2035 is one to almost three times today's capacity, which would require between 1,400 and 10,100 miles of new high-capacity lines per year, assuming new construction starts in 2026.
Climate and Health Benefits of Decarbonization Offset the Costs
NREL finds in all modeled scenarios the health and climate benefits associated with fewer emissions offset the power system costs to get to 100% clean electricity.
Decarbonizing the power grid by 2035 could add $330 billion to $740 billion in power system costs, depending on restrictions on new transmission and other infrastructure development. However, petroleum use in transportation and natural gas use in buildings and industry fall substantially by 2035. As a result, up to 130,000 premature deaths are avoided by 2035, which could save between $390 billion and $400 billion in avoided mortality costs.
When factoring in the avoided cost of damage from floods, drought, wildfires, and hurricanes due to climate change, the United States could save over an additional $1.2 trillion, totaling an overall net benefit to society ranging from $920 billion to $1.2 trillion.
Necessary Actions To Achieve 100% Clean Electricity
The transition to a 100% clean electricity U.S. power system will require more than reduced technology costs. Several key actions will need to take place in the coming decade:
- Dramatic acceleration of electrification and increased efficiency in demand
- New energy infrastructure installed rapidly throughout the country
- Expanded clean technology manufacturing and the supply chain
- Continued research, development, demonstration, and deployment to bring emerging technologies to the market.
Failing to achieve any of the key actions could increase the difficulty of realizing the scenarios outlined in the study.
The US has a relatively low total fertility rate (births per woman) compared to Africa:
Map of countries by fertility rate (2018), according to CIA World Factbook
Yet, the US is the primary source of carbon dioxide emissions:
Countries by carbon dioxide emissions in thousands of tonnes per annum, via the burning of fossil fuels (blue the highest and green the lowest).
The reason is our very high per capita CO₂ emissions:
Birth rates clearly are not the cause of "climate change."
By harping about "birth control," US citizens can blame Africans for "climate change" because their birth rate is so high. If our per capita CO₂ emissions matched Africa's, we wouldn't be in this predicament.
That's why I have decided the meme is racist in nature.
It's also a convenient excuse not to do something difficult, like cutting our per capita emissions. After all, such efforts are useless if those Africans are going to keep breeding, right?
(Please note: this story is from NASA; copyright concerns are nil.)
New Studies Increase Confidence in NASA's Measure of Earth's Temperature
By Jessica Merzdorf,
NASA's Goddard Space Flight Center
Earth's long-term warming trend can be seen in this visualization of NASA's global temperature record, which shows how the planet's temperatures are changing over time, compared to a baseline average from 1951 to 1980. The record is shown as a running five-year average. Credit: NASA's Scientific Visualization Studio/Kathryn Mersmann.
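The running five-year average mentioned in the caption is a simple smoothing step. A minimal sketch in Python, using made-up anomaly values rather than actual GISTEMP data:

```python
# Minimal sketch of a running five-year average, as used to smooth annual
# temperature anomalies for display. The values below are illustrative,
# not actual GISTEMP data.

def running_mean(values, window=5):
    """Return the mean of each full consecutive window of `values`."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

anomalies = [0.45, 0.61, 0.39, 0.54, 0.63, 0.70, 0.58]  # deg C, hypothetical
smoothed = running_mean(anomalies)  # one value per full five-year window
```

Each smoothed point averages five consecutive years, which damps year-to-year noise (El Niño, volcanic eruptions) so the long-term trend stands out.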
A new assessment of NASA's record of global temperatures revealed that the agency's estimate of Earth's long-term temperature rise in recent decades is accurate to within less than a tenth of a degree Fahrenheit, providing confidence that past and future research is correctly capturing rising surface temperatures.
The most complete assessment ever of statistical uncertainty within the GISS Surface Temperature Analysis (GISTEMP) data product shows that the annual values are likely accurate to within 0.09 degrees Fahrenheit (0.05 degrees Celsius) in recent decades, and 0.27 degrees Fahrenheit (0.15 degrees C) at the beginning of the nearly 140-year record.
This data record, maintained by NASA's Goddard Institute for Space Studies (GISS) in New York City, is one of a handful kept by major science institutions around the world that track Earth's temperature and how it has risen in recent decades. This global temperature record has provided one of the most direct benchmarks of how our home planet's climate has changed as greenhouse gas concentrations rise.
The study also confirms what researchers have been saying for some time now: that Earth's global temperature increase since 1880 (about 2 degrees Fahrenheit, or a little more than 1 degree Celsius) cannot be explained by any uncertainty or error in the data. Going forward, this assessment will give scientists the tools to explain their results with greater confidence.
GISTEMP is a widely used index of global mean surface temperature anomaly: it shows how much warmer or cooler than normal Earth's surface is in a given year. "Normal" is defined as the average during a baseline period of 1951-80.
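That definition is just a subtraction, which a short sketch makes concrete. The baseline numbers here are hypothetical, chosen only to show the arithmetic:

```python
# Sketch of a temperature anomaly: an observed temperature minus the mean
# of a baseline period (1951-1980 for GISTEMP). Numbers are hypothetical.

def anomaly(observed, baseline_temps):
    """Departure of `observed` from the baseline-period mean."""
    baseline_mean = sum(baseline_temps) / len(baseline_temps)
    return observed - baseline_mean

baseline = [14.0, 13.9, 14.1, 14.0]   # hypothetical baseline years, deg C
warm_year = anomaly(15.0, baseline)   # about +1.0 deg C warmer than "normal"
```

Working in anomalies rather than absolute temperatures is what lets stations at very different elevations and latitudes be combined into one global index.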
NASA uses GISTEMP in its annual global temperature update, in partnership with the National Oceanic and Atmospheric Administration. (In 2019, NASA and NOAA found that 2018 was the fourth-warmest year on record, with 2016 holding the top spot.) The index includes land and sea surface temperature data back to 1880, and today incorporates measurements from 6,300 weather stations, research stations, ships and buoys around the world.
Previously, GISTEMP provided an estimate of uncertainty accounting for the spatial gaps between weather stations. Like other surface temperature records, GISTEMP estimates the temperatures between weather stations using data from the closest stations, a process called interpolation. Quantifying the statistical uncertainty present in those estimates helped researchers to be confident that the interpolation was accurate.
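The general idea of interpolating between stations can be sketched with simple inverse-distance weighting. GISTEMP's actual scheme is more sophisticated, so treat this only as an illustration of the concept:

```python
# Illustrative inverse-distance-weighted interpolation: estimate the value
# at an unobserved point from nearby station readings. This is NOT the
# GISTEMP algorithm, just the general idea of filling gaps between stations.

def idw_estimate(point, stations):
    """`stations` is a list of ((x, y), value) pairs."""
    total_weight = weighted_sum = 0.0
    for (x, y), value in stations:
        d2 = (x - point[0]) ** 2 + (y - point[1]) ** 2
        if d2 == 0.0:
            return value  # the point coincides with a station
        weight = 1.0 / d2
        total_weight += weight
        weighted_sum += weight * value
    return weighted_sum / total_weight

stations = [((0, 0), 0.4), ((2, 0), 0.8)]   # two hypothetical stations
estimate = idw_estimate((1, 0), stations)   # midpoint averages the two
```

Nearby stations dominate the estimate because weights fall off with the square of distance; the new study quantifies how much error such gap-filling can introduce.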
"Uncertainty is important to understand because we know that in the real world we don't know everything perfectly," said Gavin Schmidt, director of GISS and a co-author on the study. "All science is based on knowing the limitations of the numbers that you come up with, and those uncertainties can determine whether what you're seeing is a shift or a change that is actually important."
The study found that individual and systematic changes in measuring temperature over time were the most significant source of uncertainty. Also contributing was the degree of weather station coverage. Data interpolation between stations contributed some uncertainty, as did the process of standardizing data that was collected with different methods at different points in history.
After adding these components together, GISTEMP's uncertainty value in recent years was still less than a tenth of a degree Fahrenheit, "which is very small," Schmidt said.
The team used the updated model to reaffirm that 2016 was very probably the warmest year in the record, with an 86.2 percent likelihood. The next most likely candidate for warmest year on record was 2017, with a 12.5 percent probability.
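How an uncertainty estimate turns into a "probability of warmest year" can be illustrated with a toy Monte Carlo: perturb each year's anomaly within its uncertainty many times and count how often each year comes out on top. The anomaly values and the 0.05 °C sigma below are illustrative assumptions, not the study's actual inputs or method:

```python
# Toy Monte Carlo: given annual anomalies and a per-year uncertainty,
# estimate the probability that each year is the warmest on record.
# Anomalies and sigma are illustrative, not the GISTEMP analysis.
import random

def prob_warmest(anomalies, sigma=0.05, trials=50_000, seed=42):
    """Fraction of random trials in which each year comes out warmest."""
    rng = random.Random(seed)
    wins = {year: 0 for year in anomalies}
    for _ in range(trials):
        draws = {y: a + rng.gauss(0.0, sigma) for y, a in anomalies.items()}
        wins[max(draws, key=draws.get)] += 1
    return {year: count / trials for year, count in wins.items()}

# Hypothetical anomalies (deg C): 2016 leads, but within reach of uncertainty.
probs = prob_warmest({2015: 0.90, 2016: 1.02, 2017: 0.92})
```

With a lead comparable to the uncertainty, the top year wins most but not all trials, which is why the study reports a likelihood (86.2 percent) rather than a certainty.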
"We've made the uncertainty quantification more rigorous, and the conclusion to come out of the study was that we can have confidence in the accuracy of our global temperature series," said lead author Nathan Lenssen, a doctoral student at Columbia University. "We don't have to restate any conclusions based on this analysis."
Another recent study evaluated GISTEMP in a different way that also added confidence to its estimate of long-term warming. A paper published in March 2019, led by Joel Susskind of NASA's Goddard Space Flight Center, compared GISTEMP data with that of the Atmospheric Infrared Sounder (AIRS), onboard NASA's Aqua satellite.
GISTEMP uses air temperature recorded with thermometers slightly above the ground or sea, while AIRS uses infrared sensing to measure the temperature right at the Earth's surface (or skin temperature) from space. The AIRS record of temperature change since 2003 (which begins when Aqua launched) closely matched the GISTEMP record.
Comparing two measurements that were similar but recorded in very different ways ensured that they were independent of each other, Schmidt said. One difference was that AIRS showed more warming in the northernmost latitudes.
"The Arctic is one of the places we already detected was warming the most. The AIRS data suggests that it's warming even faster than we thought," said Schmidt, who was also a co-author on the Susskind paper.
Taken together, Schmidt said, the two studies help establish GISTEMP as a reliable index for current and future climate research.
"Each of those is a way in which you can try and provide evidence that what you're doing is real," Schmidt said. "We're testing the robustness of the method itself, the robustness of the assumptions, and of the final result against a totally independent data set."
In all cases, he said, the resulting trends are more robust than what can be accounted for by any uncertainty in the data or methods.