DU Home » Latest Threads » NNadir » Journal


Profile Information

Gender: Male
Current location: New Jersey
Member since: 2002
Number of posts: 28,189

Journal Archives

Star Dancer

1987 Harry Fonseca (1946-2006): Maidu-American

At the Eiteljorg Museum, Indianapolis, Indiana.


B.B. King & John Mayer

On the combustion of biomass in oxygen enriched carbon dioxide atmospheres.

The paper I'll discuss in this thread is this one: Combustion Characteristics and Pollutant Emissions in Transient Oxy-Combustion of a Single Biomass Particle: A Numerical Study (Wang et al., Energy Fuels 2019, 33 (2), pp 1556–1569)

In general, I'm an opponent of so called "renewable energy," since I think the very term represents an absurd, if hidden, oxymoron: the low energy-to-mass ratio associated with these systems has huge environmental implications, and the fact that they are intermittent imposes a high thermodynamic (and thus, in another way, environmental) cost. "Renewable energy" is not really "renewable." It's consumptive.

These limitations are the reason that solar and wind, for example, are useless for addressing climate change, and the reason why, after trillions of dollars spent on them, they have done nothing at all to slow the acceleration of climate degradation via the destruction of the planetary atmosphere. My view is that if they were not trivial forms of energy - although it is unlikely that they will ever be anything other than trivial - their environmental consequences would be obvious. They are not obvious, layered as they are under so much popular hype, obfuscation and hand waving, although if one actually looks, one can in fact find out what those environmental costs actually are.

Actually, the most successful form of so called "renewable energy" is also the most deadly: the combustion of biomass is responsible for slightly less than half of the roughly 7 million air pollution deaths that occur each year.

Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks, 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015 (Lancet 2016; 388: 1659–724)

However, the combustion of biomass is potentially capable of becoming a very clean form of energy, and to the extent that the carbon dioxide can be captured and put to use, it is technically feasible that it could actually be carbon negative, although the claim that it is carbon neutral as currently practiced is, at best, dubious.

The means of doing this would involve combustion in a closed system, that is, a system with no smokestack and no exhaust. This is really only possible under two conditions. The first is the famous and often discussed "chemical looping" process, in which an oxygen carrier, generally a multivalent metal such as iron or cerium, is oxidized by air and then reduced by biomass in what is effectively combustion, releasing energy. I like to read about these systems for fun, but my feeling is that in practical engineering terms there are certain mass transfer features that make them problematic. To my knowledge, no large scale or even pilot scale chemical looping device exists. The second condition is to burn the fuel in pure oxygen, or (as we shall see) in oxygen mixtures other than air.
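To put rough numbers on that mass transfer concern, here is a back-of-envelope oxygen-carrier balance for an iron-based chemical looping system. Everything in it is an illustrative assumption, not data from any paper: I take CH1.4O0.6 as a generic empirical formula for dry biomass and Fe2O3/Fe3O4 as the redox couple.

```python
# Back-of-envelope oxygen-carrier balance for chemical-looping combustion
# of biomass. Assumptions (mine, for illustration only): dry biomass is
# CH1.4O0.6, and the carrier cycles between Fe2O3 and Fe3O4, so the
# oxygen-release step is 6 Fe2O3 -> 4 Fe3O4 + O2 (6 mol Fe2O3 per mol O2).

M_C, M_H, M_O, M_Fe = 12.011, 1.008, 15.999, 55.845  # g/mol

def o2_demand_per_mol(h_per_c, o_per_c):
    """Moles of O2 needed to fully combust one mole of CHxOy to CO2 + H2O."""
    return 1.0 + h_per_c / 4.0 - o_per_c / 2.0

def fe2o3_per_kg_biomass(h_per_c=1.4, o_per_c=0.6):
    """kg of Fe2O3 that must be reduced to Fe3O4 per kg of biomass burned."""
    m_fuel = M_C + h_per_c * M_H + o_per_c * M_O      # g/mol of CHxOy
    mol_fuel = 1000.0 / m_fuel                        # mol fuel per kg biomass
    mol_o2 = mol_fuel * o2_demand_per_mol(h_per_c, o_per_c)
    mol_fe2o3 = 6.0 * mol_o2                          # carrier stoichiometry
    m_fe2o3 = 2 * M_Fe + 3 * M_O                      # ~159.7 g/mol
    return mol_fe2o3 * m_fe2o3 / 1000.0               # kg carrier per kg fuel

print(f"{fe2o3_per_kg_biomass():.0f} kg Fe2O3 cycled per kg biomass")
```

The answer comes out to roughly 44 kg of solid carrier circulated per kilogram of fuel, which is exactly the sort of solids-handling burden that makes these systems hard to engineer at scale.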

These conditions appeal to me, as I have been very interested in thermochemical water and carbon dioxide splitting cycles and have written about them in this space (and elsewhere). Both types of cycles are designed to produce pure oxygen; and there are also cycles - albeit somewhat more obscure - that produce hydrogen (or its potential surrogate, carbon monoxide) along with equimolar mixtures of carbon dioxide and oxygen.

I've been fascinated by this latter stream, equimolar oxygen and carbon dioxide mixtures, because I imagine many useful applications for them, but I haven't seen very much written about them, at least until I came across the recent paper cited at the outset of this post.

Of course, simply because I haven't heard of something about which I've speculated doesn't imply that it hasn't already been studied in significant detail; I'm not that smart nor am I that well read. This paper refers to actual experiments that have been done along these lines. Here is reference 37 in the paper, which I have not read but will access in the future:

(37) Khatami, R.; Stivers, C.; Joshi, K.; Levendis, Y. A.; Sarofim, A. F. Combustion behaviour of single particles from three different coal ranks and from sugar cane bagasse in O2/N2 and O2/CO2 atmospheres. Combust. Flame 2012, 159, 1253−1271.

The paper currently under discussion is about the mathematical modeling of the combustion of biomass in atmospheres other than air, and it compares the model's predictions with the experimental results reported in reference 37.

From the introductory paragraphs of the paper:

The growing concerns about global warming and issues around energy security have turned renewable sources of energy into the main means of addressing world energy demands.1 Biomass is regarded as a promising renewable fuel and has seen an increased tendency in use. Pulverized combustion for power generation, similar to that for coal, is perhaps the most common technology for utilizing biomass energy,2 which is being promoted worldwide.3 A large amount of carbon dioxide (CO2) generated from coal-fired power plants is now a serious issue, and thus, different methods have been developed for carbon capture and storage (CCS).4 Among these, oxy-fuel combustion is regarded as the most promising CCS technique for power station utilization.4 It is, however, noted that provision of oxygen through low-carbon processes is an important prerequisite to this. Due to carbon neutrality of biomass, application of CCS to biomass-fired stations can lead to negative carbon generation, which is an attractive method of decarbonising the atmosphere. Successful implementation of oxy-combustion of biomass requires an understanding of the underlying physicochemical processes under O2/CO2 by O2/ N2 atmospheres. Yet, some aspects of oxy-coal/biomass combustion including the volatiles matter evolution, homogeneous reactions, and heterogeneous combustion of char are quite complex and far from being fully understood and thus require further research...

What follows is a brief review of several papers on combustion in non-air, oxygen-enriched atmospheres, along with a brief reference to a problem with biomass combustion: corrosion of the combustion chambers owing to the serious (and often deadly) pollutants it generates, specifically nitrogen oxides and sulfates. Then the raison d'être for the paper is given:

...The preceding review of the literature indicates that, so far, most investigations have been focused on coal or char combustion, and there are only a few studies on a single biomass particle under oxy-fuel conditions. More importantly, the existence of inconsistent and sometimes conflicting results on NOx and SOx emissions highly necessitates conduction of further investigations. Thus, the current work performs a numerical study of combustion of a single biomass particle under O2/N2 and O2/CO2 environments with varying oxygen concentration. The spatiotemporal distributions of the temperature and species fields are analyzed, and NOx and SOx emissions are evaluated to provide a deeper insight into the underlying physicochemical phenomena.

A little bit about the theory behind the model which involves the numerical evaluation of a bunch of differential equations.

Some flavor:

The numerical simulations are conducted by using ANSYS Fluent 15.0. A Euler−Lagrange numerical model with standard k−ε turbulence model, weighted-sum-of-gray-gases model (WSGGM), and P-1 radiation model (spherical harmonic method) was implemented.38 Further, the SIMPLE algorithm was used for velocity−pressure coupling,39 and the effect of gravity was added to the numerical simulations. The computational model simultaneously solves the following governing equations. The conservation of mass is given by

Conservation of momentum in axial and radial directions read

The balance of energy for the reactive flow is written as

and the conservation of species (sic) is expressed by

The ideal gas law for the multicomponent gas is written as
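The equations themselves were rendered as images in the original and did not carry over. For the record, the standard forms of these governing equations - my reconstruction in compact vector notation, not a verbatim copy of the paper's explicit axial/radial components - are:

```latex
% Conservation of mass (continuity):
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}) = S_m

% Conservation of momentum (the paper writes the axial and radial
% projections of this for the axisymmetric domain):
\frac{\partial}{\partial t}(\rho \vec{v}) + \nabla \cdot (\rho \vec{v}\vec{v})
  = -\nabla p + \nabla \cdot \bar{\bar{\tau}} + \rho \vec{g}

% Balance of energy for the reactive flow:
\frac{\partial}{\partial t}(\rho E) + \nabla \cdot \bigl( \vec{v}\,(\rho E + p) \bigr)
  = \nabla \cdot \Bigl( k_{\mathrm{eff}} \nabla T - \sum_j h_j \vec{J}_j \Bigr) + S_h

% Conservation of species i:
\frac{\partial}{\partial t}(\rho Y_i) + \nabla \cdot (\rho \vec{v}\, Y_i)
  = -\nabla \cdot \vec{J}_i + R_i

% Ideal gas law for the multicomponent mixture:
p = \rho R T \sum_i \frac{Y_i}{M_i}
```

Here ρ is density, v the velocity, τ the stress tensor, Yi and Mi the mass fraction and molar mass of species i, Ji the diffusive flux, and Ri the net rate of production of species i by chemical reaction.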

There are, of course, many far more sophisticated mathematical models for multicomponent gas mixtures built around various equations of state, but as one can glean, there is already a lot of computer time invested in this project, and when one gets to the meat of the results, they're fairly accurate when compared with the experimental results found in reference 37 - generally within 3% - with the exception represented by the less than interesting case of depleted air, 20% oxygen and 80% nitrogen.

Here's a picture of the geometry of the simulating chamber:

The caption:

Figure 1. Schematic of axis-symmetric domain used for the numerical simulations.

A graphic on nitrogen flows, including some hydrogen cyanide:

The caption:

Figure 2. Fuel−NOx pathways

Here is the flavor of what the simulations' graphical output looks like, this one showing the spatiotemporal distribution of the mass fraction of CO2 around a biomass particle falling in the chamber of the experimental system being modeled.

The caption:

Figure 4. Spatiotemporal distribution of the mass fraction of CO2: (a) 37% O2/CO2 (2, 6, 10, 14, and 18 ms) and (b) 100% O2 (3, 5, 7, 9, and 11 ms).

Another interesting example of the same:

The caption:

Figure 8. History of mass-averaged mole fraction of the major gaseous species during single biomass particle combustion: (a) 27% O2 and 71% N2, (b) 100% O2, (c) 37% O2 and 63% CO2, and (d) 77% O2 and 23% CO2.

Another interesting one:

The caption:

Figure 9. Overall PPM in different atmospheres during single-Bagasse particle combustion: (a) NO and (b) SO2

A nice graphical overview of all the results of these simulations:

The caption:

Figure 11. Species versus particle mass reduction during single-Bagasse particle combustion: (a and b) 37% O2 and 63% N2, (c and d) 77% O2 and 23% N2, (e and f) 37% O2 and 63% CO2, and (g and h) 77% O2 and 23% CO2.

Another nice summary graphic:

The caption:

Figure 12. Species formation percentage for devolatilization and char combustion processes under different gas conditions: (a) HCN, (b) NH3, (c) SO2, and (d) H2S.

A few remarks out of the conclusion that are of interest:

• The combustion behavior of single biomass particle is significantly different in O2/N2 and O2/CO2 atmospheres. The volatile matters combust prior to ignition of the particle in O2/CO2, while the volatiles and chars combust sequentially in O2/N2 conditions.

• Under CO2 atmosphere, the production and depletion process of CO is majorly affected by the large amount of CO2 existing in the background gas.

Tomorrow morning I'll be attending a lecture on how we might "adapt" to climate change in New Jersey.



The fact is we will be forced to adapt, as best we can...or die.

The reason is that we are doing nothing serious to address climate change other than to dump responsibility for our indifference on all future generations of all living things.

I'm sorry, but solar roofs on McMansions, and converting the entire continental shelf into industrial parks for wind turbines that will be in landfills in twenty years won't work, nor will worshiping Elon Musk's stupid car for rich people, nor any of the other horseshit we hear about endlessly while things deteriorate faster and faster.

None of this has worked; none of it is working, and again, and again and again, it won't work.

Sorry, it's just reality.

It does seem that it's technically feasible to find a way out, but we'd rather recite dogma than actually try something different.

However that is, little obscure papers like this are a little bit of hope, and as I near the end of my life it's all I have...a little bit of hope.

Have a pleasant weekend.

Estimating the Age of Life Using Moore's Law.

I absolutely have to watch this video lecture, but have no time now.

COLLOQUIUM: Estimating the Age of Life Using Moore's Law

I only have time to watch a few minutes; I have a huge meeting tomorrow and need to get to bed early.

I watched the first few minutes, in which the speaker, a biologist said (I paraphrase), "It's a great honor to be here. Physicists probably would be more interested in this lecture than biologists."

I have a feeling there'll be some astrobiology in it. I love that stuff.

Recovery of Phosphorus From the Supercritical Water Gasification of Dried Sewage Sludge.

The paper I'll discuss in this post is this one: Behavior of Phosphorus in Catalytic Supercritical Water Gasification of Dewatered Sewage Sludge: The Conversion Pathway and Effect of Alkaline Additive (Chenyu Wang, Wei Zhu, Cheng Chen, Hao Zhang, Yujie Fan, Biao Mu, and Jun Zhong, Energy & Fuels 2019, 33 (2), pp 1290–1295)

One of the big problems humanity faces in the long term - not that we're particularly interested in the lives of future generations, unlike the generations that preceded us - is the availability of phosphorus, which drove the "green revolution" of the 1950's. That revolution wasn't about happy talk about so called "renewable energy"; it was about feeding humanity, that is, about agriculture.

As a fan of the possibilities connected with supercritical fluids, in particular supercritical water, this paper about phosphorus recovery caught my eye.

From the introductory text:

Phosphorus is an essential element for all life forms and it is estimated that the remaining accessible reserves of phosphate rock on the earth will run out in 50 years if the growth of demand for fertilizers remains at 3% per year.(1) For this reason, the recovery of phosphorus is very necessary. Dewatered sewage sludge (DSS) is an inevitable by-product of sewage treatment. It is difficult to dispose and is a source of environmental pollution risks because of its high moisture content and complex organic components. However, because of the large amount of phosphorus enriched in sludge during the sewage treatment process,(2) it has a high phosphorus recovery potential.

Supercritical water gasification (SCWG) of sewage sludge has been receiving widespread attention in recent years,(3) because it is a method that can decompose pollutants in sewage sludge and, at the same time, can produce syngas (hydrogen, methane, carbon monoxide, and so on), a clean energy resource.(4) However, DSS contains many macromolecular substances such as lignin and humus, which inhibit gasification to some degree. In addition, the reaction conditions of SCWG are harsh, requiring a significant amount of energy for the water to reach a supercritical state. Thus, it is difficult to justify the high cost of operation if the only product obtained from the process is syngas. However, if large amounts of phosphorus can be recovered simultaneously with syngas, then, the product value of SCWG of DSS will be improved effectively.

To achieve high phosphorus recovery from DSS, it is necessary to study the regulation of the transformation of phosphorus during the gasification of DSS in supercritical water. In our previous work,(5) the DSS was treated in an autoclave at a reaction temperature of 400–500 °C without adding a catalyst. The organic phosphorus in the sludge was almost completely converted into inorganic phosphorus after the reaction, yielding a large amount of phosphorus that reached 20 mg/g in the solid residue...
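As an aside, the "50 years at 3% growth" figure in the quoted introduction is easy to check. If annual demand grows exponentially, integrating it and setting the cumulative total equal to the reserve gives a closed-form exhaustion time. The static reserve ratio used below (about 116 years) is a value I chose to illustrate the arithmetic, not a figure from the paper:

```python
import math

def exhaustion_time(static_ratio_years, growth_rate):
    """Years until a reserve is exhausted when demand grows exponentially.
    static_ratio_years is the naive reserves-to-current-demand ratio;
    growth_rate is the fractional annual growth in demand. Integrating
    D0*exp(r*t) and equating cumulative demand to the reserve gives
    t = ln(1 + r*S) / r."""
    r, S = growth_rate, static_ratio_years
    return math.log(1.0 + r * S) / r

# A ~116-year static ratio shrinks to about 50 years at 3% annual growth:
print(f"{exhaustion_time(116.0, 0.03):.0f} years")
```

The point of the exercise: exponential growth in fertilizer demand cuts a comfortable-sounding century-scale static reserve ratio roughly in half, which is why recovery from sludge matters.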

The problem with phosphorus in the solid residues is that it is not easy to recover.

The authors reference a number of other publications discussing this problem, noting that a scheme for gasifying algae produced a phosphorus-enriched liquid phase in the presence of alkaline salts.

...Therefore, knowledge based on the current research rests on two common areas of understanding. The first is that after hydrothermal treatment, organic phosphorus will be converted into inorganic phosphorus. The second is that inorganic phosphorus is mainly enriched in the solid phase products after hydrothermal treatment. However, the regulation of phosphorus transformation and the pathway it takes under catalytic conditions in the SCWG of DSS are still poorly understood. In this work, Na2CO3 and K2CO3 were used as homogeneous alkaline catalysts to further study (1) the transformation of phosphorus during sludge gasification in supercritical water and (2) the effects of alkaline additives on phosphorus behavior and the mechanism involved in such effects. On the basis of our results, we propose a strategy for the recovery of phosphorus from gasification products of DSS in supercritical water.

They gasified sewage sludge obtained from the Nanjing Sewage Treatment plant.

After drying, portions were removed as samples and burned at 550 °C for 4 hours to determine total organic carbon.

The remaining portions were gasified in supercritical water at 400 °C and 23 MPa for 30 minutes.

Some graphics about their results:

The caption:

Figure 1. Effect of the amount of alkaline additives on the phosphorus content of the liquid product (400 °C, 30 min).

The caption:

Figure 2. Distribution of total phosphorus in solid residue (S-TP) and liquid product (L-TP) (400 °C, 30 min).

Phosphorus in the solid phase was determined to exist in a number of forms:

Phosphorus in the solid residue exists in either organic or inorganic forms. The inorganic phosphorus includes exchangeable phosphorus (Ex-P), aluminum-combined phosphorus (Al-P), iron-combined phosphorus (Fe-P), occluded phosphate (Oc-P), self-ecological phosphorite, and debris phosphorus. Self-ecological phosphorite and debris phosphorus both belong to calcium-combined phosphorus (Ca-P). The respective contents of the various forms of phosphorus in dry raw sludge and solid residues after SCWG with different alkaline additives were determined. The results are shown in Figure 3. In the raw sludge, phosphorus was mainly in the form of inorganic phosphorus, and the content of organic phosphorus was only 0.14 mg/g. The value we obtained for the content of organic phosphorus in raw sludge is lower than the values measured by other scholars,(5,7) which may be mainly due to the difference in sludge properties and sewage treatment processes.

Figure 3:

The caption:

Figure 3. Effect of the amount of (a) Na2CO3 and (b) K2CO3 on phosphorus forms in solid residues (400 °C, 30 min).

The addition of alkali metal carbonates increases the liquid fraction. It also works to change the aluminum speciation:

Quartz (SiO2) was mainly detected in raw sludge and in the solid residues without alkaline additives. When Na2CO3 was added, the sodium ions combined with aluminum ions and SiO2 to form analcime (NaAlSi2O6). On the other hand, when K2CO3 was added, the potassium ions combined with aluminum ions and SiO2 to form kalsilite (KAlSiO4). Aluminum ions tend to combine with alkali metal ions, and the phosphate ions that were originally bound to the aluminum ions are released into liquid products. The results of XRD indicate that the reduction of Al-P is related to the addition of alkali metal ions. Moreover, the addition of K2CO3 is related to a weaker detection peak signal of SiO2 compared to the addition of Na2CO3

The XRD spectra:

The caption:

Figure 4. XRD patterns of (1) raw dry sludge and solid residue, (2) without additive, (3) with 4 wt % Na2CO3, and (4) with 4 wt % K2CO3 (400 °C, 30 min).

The caption:

Figure 5. (a) Olsen-P content of the solid residue and (b) amount and proportion of DRP in the liquid product (400 °C, 30 min).

Olsen-P refers to an analytical method; DRP refers to "dissolved reactive phosphorus."

The control of aluminum speciation is believed by the authors to be one of the mechanisms allowing for the release of phosphorus into the liquid phase.

In the presence of alkaline additives, the transformation of phosphorus occurs not only in the solid phase but also between solid and liquid phases. The conversion of phosphorus in the solid residue follows the same pathway that operates without alkaline additives, and the pathway between solid and liquid phases mainly follows two routes.

In the first, as shown in eq 3, the phosphorus that was originally combined with calcium releases into liquid product under action with alkaline additives

In the second route, as shown in eq 4, alkali metal ions combine with Al to form analcime or kalsilite, and phosphorus, which was originally combined with aluminum, is released into the liquid phase.
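The equation images (eqs 3 and 4) did not survive the transcription. As a hedged sketch of the two routes - my own balanced reactions consistent with the chemistry described, not the paper's actual equations; M stands for Na or K - they might be written:

```latex
% Route 1 (cf. eq 3): carbonate displaces calcium-bound phosphate,
% sending soluble alkali phosphate into the liquid phase
\mathrm{Ca_3(PO_4)_2 + 3\,M_2CO_3 \longrightarrow 3\,CaCO_3 + 2\,M_3PO_4}

% Route 2 (cf. eq 4): the alkali metal and silica scavenge aluminum into
% analcime (M = Na) or a kalsilite-type phase (M = K), freeing phosphate
\mathrm{2\,AlPO_4 + 4\,M_2CO_3 + 4\,SiO_2 \longrightarrow
        2\,MAlSi_2O_6 + 2\,M_3PO_4 + 4\,CO_2}
```

In both schematics the phosphate leaves the solid as the soluble alkali salt, which is consistent with the enrichment of phosphorus in the liquid product reported above.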

This graphic summarizes the authors' view of the mechanism:

The caption:

Figure 6. Proposed pathway of phosphorus transformation in the SCWG of sewage sludge with alkaline additives (M+ represents the Na+ or K+ and m = 0, 1, 2).

Phosphorus distribution:

From the conclusion of the paper:


The behavior of phosphorus during sludge catalytic gasification with alkaline additive in SCW was studied. Without an alkaline additive, the dominant reaction process is the conversion of different forms of phosphorus in solid phase, and 98.9% of the phosphorus enriches in the solid residues. Adding an alkaline additive can effectively promote the transfer of phosphorus from the solid phase to the liquid phase. Alkaline additives combine with Ca2+ and Al3+ to form calcium carbonate, analcime and kalsilite, and the phosphorus that was originally combined with Ca2+ or Al3+ is released to the liquid phase in the form of phosphate. The highest content of phosphorus in the liquid product reached 2214.5 mg/L, which is equivalent to the yield of other phosphorus recovery methods by chemical extraction. Direct production of liquid products with a high phosphorus concentration can simplify the exaction steps during subsequent phosphorus recovery. Therefore, the recovery of phosphorus from municipal sewage sludge by SCWG has great potential.

My view is that supercritical water oxidation is a critical technology if we ever hope to be serious about climate change, although there is little evidence we will ever be so. We hold the future in contempt.

Critical problems with supercritical water oxidation of biomass, of which sewage sludge is a subset, involve corrosion of the reactors, driven in part by potassium interactions. Nonetheless, I believe this materials science problem has a solution.

I trust you will have a pleasant Sunday evening.

The Search for Hydrophobic Deep Eutectic Solvents From Natural Materials.

The paper I'll discuss in this post is this one: A Search for Natural Hydrophobic Deep Eutectic Solvents Based on Natural Components

Increasingly, the quality of water supplies around the world is being degraded by waste materials, not only from industrial practices but also from fecal and agricultural waste products, as well as by the results of this contamination: eutrophic blooms like those that have destroyed the Mississippi Delta, and disasters like the Microcystis blooms that left water supplies on Lake Erie highly toxic in 2015.

Potentially, one of the least energy intensive procedures for removing toxins from water is solvent extraction - with the important caveat that the solvent in question not be toxic itself and not be very soluble in water, that is, what chemists call "hydrophobic."

Also, it is desirable to have such solvents for industrial processes, for example, the recovery of valuable materials from used nuclear fuels in order to provide sustainable energy. From my perspective, as a student and advocate of nuclear fuel reprocessing, the industrial process that has been in use for over 50 years - albeit with many modifications and tweaks - the PUREX process, depends on kerosene, a product of the dangerous fossil fuel industry that is, by definition, not sustainable. Therefore if solvent extraction continues to be used for nuclear fuel reprocessing, at the very least, sustainable hydrophobic solvents must be utilized. (I'm not really a fan of solvent extraction of actinides, but if we use solvents, we should do everything possible to divorce them from dangerous fossil fuels.)

In addition, to the extent that we can utilize carbon based materials without releasing them into the planetary atmosphere - our favorite waste dump as of 2019 - that carbon is sequestered. If we use products obtained from carbon dioxide captured from the atmosphere (or from seawater or fresh water), we have removed carbon dioxide.

This is why this paper caught my eye, the caveat being that it is a lab scale process and is nowhere near pilot or industrial scale. (This is not a "we're saved" post.)

From the introduction to the paper:

In the near future, conventional solvents should be replaced by designer solvents to obey the 12 principles of Green Chemistry, introduced by Anastas and Warner.(1) In 2003 a class of designer solvents, called deep eutectic solvents (DESs), were reported that could obey these principles of Green Chemistry. The first DESs reported in the literature were composed of combinations of amides and choline chloride.(2) DESs consist of two or more components that liquify upon contact, which most likely is caused by entropy of mixing, hydrogen bonding and van der Waals interactions.(3,4) These physical interactions are supposed to induce a dramatic decrease in the melting temperature of the mixture, as opposed to the melting temperature of the pure components, by stabilizing the liquid configuration, inducing a liquid phase at room temperature.

DES research initially focused on hydrophilic DESs. In 2015 hydrophobic DESs were reported in the literature for the first time.(5,6) These were tested for the extraction of volatile fatty acids (VFAs) and biomolecules, such as caffeine and vanillin, from an aquatic environment.(5,6) Although the field of hydrophobic DESs is new, already quite some papers about their use were published. These include the removal of metal ions,(7−9) furfural and hydroxymethylfurfural by the use of membrane technology,(10) and pesticides from H2O.(11) Furthermore, hydrophobic DESs showed their potential for the capture of gases (CO2),(12,13) and their use for microextractions was investigated.(14,15) Moreover, the extraction of components from leaves using hydrophobic DESs was studied.(16,17)

The hydrophobic DESs currently presented in the literature are promising, especially application-wise, but several improvements are needed. The first improvement is the use of more natural components. In our first investigation on hydrophobic DESs quaternary ammonium salts were used,(6) that from an environmental point of view are not the best. It is the idea to overcome this by the use of natural components, terpenes. However, in the future also more detailed investigations on their sustainability and toxicity should be addressed with specific methods as stated in the literature,(18−20) even as these DESs based on natural components are generally accepted as environmentally friendly.(21,22)

Another improvement that we would like to introduce is testing the sustainability of these solvents from a chemical engineering point of view. If a hydrophobic solvent is too viscous or the density difference with water is too small phase separation will be difficult, which results in high energy demands. For ease of processability, the viscosity should be as low as possible, while the difference of the density between the DES and water should be as large as possible because a density difference enhances the macroscopic phase separation process to a large degree...

A eutectic is, of course, a mixture of two compounds that melts at a lower temperature than either of the pure compounds: The most familiar example is salt water, which is why we dump salt on our roads to maintain our car CULTure during ice and snow storms.

There are many eutectics known of various types. It's a fascinating area of study, and I love reading about eutectics.

A "deep eutectic" is a eutectic whose melting temperature is roughly at, or considerably lower than, "room temperature," 25 °C, while other eutectics melt at higher temperatures.

For example, a eutectic forms between neptunium and plutonium that melts at 570 °C, compared to the melting points of the pure metals, 638 °C for pure plutonium and 640 °C for pure neptunium. Plutonium and iron form a eutectic that melts at 428 °C, compared to the melting point of pure iron, which is 1538 °C.

We could list thousands of examples of eutectics.
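The depth of a eutectic can even be estimated from first principles. A minimal sketch, assuming ideal-solution behavior (the Schröder-van Laar equation, with all activity coefficients set to 1) and using round illustrative numbers rather than measured data for any real pair of compounds:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ideal_liquidus_x(T, T_m, dH_fus):
    """Schroeder-van Laar ideal solubility: the mole fraction of a
    component on its liquidus at temperature T (K), given its melting
    point T_m (K) and enthalpy of fusion dH_fus (J/mol)."""
    return math.exp(-(dH_fus / R) * (1.0 / T - 1.0 / T_m))

def eutectic(T_m1, dH1, T_m2, dH2):
    """Bisect for the temperature where the two liquidus curves cross
    (x1 + x2 = 1); returns (T_eutectic, x1 at the eutectic)."""
    lo, hi = 100.0, min(T_m1, T_m2)   # the eutectic lies below both T_m
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        s = ideal_liquidus_x(mid, T_m1, dH1) + ideal_liquidus_x(mid, T_m2, dH2)
        if s > 1.0:      # above the eutectic: combined solubility too high
            hi = mid
        else:            # below the eutectic: solid would precipitate
            lo = mid
    T_e = 0.5 * (lo + hi)
    return T_e, ideal_liquidus_x(T_e, T_m1, dH1)

# Illustrative inputs: two components melting at 324 K and 316 K with
# fusion enthalpies of 20 and 12 kJ/mol (invented round numbers).
T_e, x1 = eutectic(324.0, 20000.0, 316.0, 12000.0)
print(f"eutectic near {T_e:.0f} K at x1 = {x1:.2f}")
```

Even with ideal mixing, two solids melting in the low 300s K yield a liquid well below either melting point; the "deep" in deep eutectic refers to melting depressions far larger than this ideal prediction, attributed to hydrogen bonding between the components.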

The eutectics here are all organic compounds, and all, more or less, are natural products or can be obtained from natural products in a facile fashion.

From the text:

The following components were used as DES constituents in this work: decanoic acid (DecA), dodecanoic acid (DodE), menthol (Men), thymol (Thy), 1-tetradecanol (1-tdc), 1,2-decanediol (1,2-dcd), 1-10-decanediol (1,10-dcd), cholesterol (Chol), trans-1,2-cyclohexanediol (1,2-chd), 1-napthol (1-Nap), atropine (Atr), tyramine (Tyr), tryptamine (tryp), lidocaine (Lid), cyclohexanecarboxyaldehyde (Chcd), caffeine (Caf) and coumarin (Cou). Some components were used as hydrogen bond donors (HBDs), while others were used as hydrogen bond acceptors (HBAs). A few of these components can both donate and accept hydrogen bonds. In the literature some of the combinations with lidocaine were previously presented in the literature as eutectic mixtures.(23−25) More recently a debate has started on the definition of DESs, specifically on the deepness in melting point depression, and models were developed for predicting their phase diagram.(26−32) Because there are still debates on the definition of DESs in the literature, for now we consider all the presented combinations as DESs.

For the purposes of their experiments, they evaluated the extraction of riboflavin (vitamin B2) from water - a difficult extraction - using various deep eutectic solvents prepared from these compounds.

After rejecting some possible deep eutectics prepared from this list because they tended to crystallize on storage, some promising systems were evaluated for thermal stability.

The caption:

Figure 1. Thermograms of the DESs Men:Lid (2:1), DecA:Men (1:2), 1-tdc:Men (1:2), 1,2-dcd:Thy (1:2) and DecA:Men (1:1). The x-axis shows an increase in temperature [K], while the y-axis shows the loss in weight [%].

The caption:

Figure 2. Thermograms of the DESs Thy:Men (1:2), Thy:Men (1:1), Thy:Cou (1:1), Thy:Cou (2:1) The x-axis shows an increase in temperature [K], while the y-axis shows the loss in weight [%].

(The boiling point of water is 373K.)

The proton nuclear magnetic resonance (NMR) spectrum of one deep eutectic:

The caption:

Figure 3. 1H NMR of the DES Thy:Cou in a 2:1 molar ratio

A remark: Thymol is a natural product that is responsible for the pleasant taste and odor of thyme. It is worth noting that it is similar in structure to the dangerous fossil fuel derivative cumene, inasmuch as both are isopropyl benzenes; cumene is utilized industrially to make phenol and acetone (nail polish remover). It is certainly possible to synthesize thymol from dangerous fossil fuels, but this would defeat the purpose of banning dangerous fossil fuels, even if it would reduce their overall toxicity. I am certainly no expert in thymol sourcing, but it's doubtful it could be obtained sustainably from thyme itself: if you've ever grown a thyme plant, you know they're not all that bulky. A better route to thymol might, however, be the digestion and processing of lignins, the "other" constituent of wood (and grain plant stalks) besides cellulose.

This however, is research, not industrial practice.

The 13C NMR of the same DES:

The caption:

Figure 4. 13C NMR of the DES Thy:Cou in a 2:1 molar ratio.

The criteria for the viability of these deep eutectics are that they show low solubility in water and that they have reasonable viscosities.

While some exhibit low solubility in water, water is not entirely insoluble in them. Thus it is important to understand whether they react with water and are degraded in the process. This is important for their sustainability with respect to reuse.

The following spectra show that, in this case, they are not:

The caption:

Figure 5. 1H NMR of the DES Thy:Cou in a 2:1 molar ratio after mixing with H2O.

The caption:

Figure 6. 13C NMR of the DES Thy:Cou in a 2:1 molar ratio after mixing with H2O.

A number of other physical traits are examined in the paper to identify promising mixtures.

From the conclusion of the paper:

In this work a series of new, hydrophobic DESs based on natural components were reported. From 507 combinations of two solid components, 17 became a liquid at room temperature, which were further assessed for their sustainability via four criteria. These criteria are based on the use of the hydrophobic DESs as extractants and include a viscosity below 100 mPa·s, a density that should be rather different than the density of the water phase (50 kg·m–3) a limited pH change of the water phase upon mixing with water and a low amount of DES that transfers to the water phase.

More than 10 DESs follow the viscosity criterion below 100 mPa·s. Regarding the density, the criterion was set at a density difference between the DES and water as large as possible (ρ ≥ 50 kg·m–3).

The hydrophobic DESs Deca:Men (1:1), DecA:Men (1:2), Men:Lid (2:1), Thy:Cou (2:1), Thy:Men (1:1), Thy:Cou (1:1), Thy:Men (1:2) and 1-tdc:Men (1:2) satisfy this criterion.

Furthermore, the criterion of a limited pH change (between 6 and 8) of the water phase coexisting with the DES showed that the hydrophobic DESs DecA:Lid (2:1), DecA:Atr (2:1), Thy:Cou (2:1), Thy:Men (1:1), Thy:Cou (1:1), Thy:Men (1:2), 1-tdc:Men (1:2) and Atr:Thy (1:2) have a negligible pH change. The amount of organics that transfers to the water phase was comparable for all developed hydrophobic DESs, except for DecA:Lid (2:1), DecA:Atr (2:1) and Atr:Thy (1:2), which had considerably higher TOC values.

In general, the newly developed DESs Thy:Cou (2:1), Thy:Men (1:1), Thy:Cou (1:1), Thy:Men (1:2) and 1-tdc:Men (1:2) satisfied all four criteria. Therefore, these hydrophobic DESs may be considered as relatively sustainable, hydrophobic designer solvents. These DESs were used for the removal of riboflavin from an aqueous environment. All new hydrophobic DESs showed moderate to high extraction yields. The highest extraction efficiency of riboflavin, 81.1%, was achieved with the hydrophobic DES DecA:Lid (2:1).
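The four criteria quoted above amount to a simple screen, which can be sketched as below. The thresholds come from the paper's quoted conclusion, but the example candidate's numbers are hypothetical placeholders, not measured values from the paper.

```python
# A minimal sketch of the paper's four screening criteria for a
# hydrophobic DES. Thresholds follow the quoted conclusion; the
# example values passed in below are hypothetical, NOT measurements.
def passes_criteria(viscosity_mpas, density_diff_kgm3, water_ph, high_toc):
    return (viscosity_mpas < 100          # viscosity below 100 mPa·s
            and density_diff_kgm3 >= 50   # density differs from water by >= 50 kg/m3
            and 6 <= water_ph <= 8        # negligible pH change of the water phase
            and not high_toc)             # little DES transfers to the water phase

# Hypothetical candidate: 40 mPa·s, 120 kg/m3 density difference,
# water phase at pH 6.9, low TOC.
print(passes_criteria(40, 120, 6.9, False))  # True
```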

Cool paper I think.

I hope you're having a pleasant Sunday afternoon.

New Record Weekly High For CO2 Measurements at Mauna Loa.

2019 is shaping up to be a doozy of a year at the Mauna Loa carbon dioxide observatory.

I keep a spreadsheet of the weekly year-to-year increases in the concentration of CO2 measured there. The most recent measurement of carbon dioxide in the planetary atmosphere, for the week beginning on February 10, 2019, is 412.41 ppm. This is the highest value ever recorded there. The previous high was 411.16 ppm, measured during the week beginning June 10, 2018.

This value, 412.41 is 3.86 ppm higher than the same week last year.

As of this writing, there are 2246 such weekly year-to-year increases in carbon dioxide recorded on the Mauna Loa CO2 observatory's website.

This week's year-to-year increase is the 26th highest of all time. This places it in the 98.8th percentile.

Of the 50 highest such measurements, 33 have taken place in the last 5 years, 36 in the last 10 years, and 39 in this century.

Of the 50 highest measurements, 3 have been recorded in 2019, the last measurement having been the 6th such measurement of this young year. We're just getting started: only 46 more measurements to go.
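The percentile figure above is just a rank calculation; a quick sketch of the arithmetic:

```python
# The 26th-highest of 2,246 recorded weekly year-to-year increases:
n_records = 2246
rank_from_top = 26
percentile = 100 * (n_records - rank_from_top) / n_records
print(f"{percentile:.1f}th percentile")  # 98.8th percentile
```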

In the last ten years, humanity as a whole has "invested" - my word would be "squandered" - more than two trillion dollars on two forms of so called "renewable energy," specifically solar and wind.

This information is here, in the UNEP Frankfurt School Report, issued each year: Global Trends In Renewable Energy Investment, 2018

It's having an effect, and it's written in the planetary atmosphere.

As for the "astounding growth" of so called "renewable energy" which is often described as "cheap," the following data tells another story:

In this century, world energy demand grew by 164.83 exajoules to 584.95 exajoules.

In this century, world gas demand grew by 43.38 exajoules to 130.08 exajoules.

In this century, the use of petroleum grew by 32.03 exajoules to 185.68 exajoules.

In this century, the use of coal grew by 60.25 exajoules to 157.01 exajoules.

In this century, the solar, wind, geothermal, and tidal energy on which people so cheerfully have bet the entire planetary atmosphere, stealing the future from all future generations, grew by 8.12 exajoules to 10.63 exajoules.

2018 Edition of the World Energy Outlook Table 1.1 Page 38 (I have converted MTOE in the original table to the SI unit exajoules in this text.)
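The conversion mentioned in the note above is a single multiplicative factor (1 Mtoe = 0.041868 EJ). A sketch, where the ~13,972 Mtoe figure is my back-calculation from the 584.95 EJ quoted above, not a number read from the WEO table:

```python
# Converting the WEO's million tonnes of oil equivalent to exajoules.
MTOE_TO_EJ = 0.041868  # 1 tonne of oil equivalent = 41.868 GJ

def mtoe_to_ej(mtoe):
    return mtoe * MTOE_TO_EJ

# ~13,972 Mtoe (back-calculated, assumed) gives roughly the 585 EJ
# world energy demand quoted above:
print(f"{mtoe_to_ej(13972):.1f} EJ")  # roughly 585 EJ
```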

After Fukushima, the world decided that nuclear energy was "too dangerous." Of the roughly 20,000 people killed by the earthquake and tsunami that destroyed three nuclear reactors at Fukushima, almost every one of them was killed by seawater. Very few, if any, people died from radiation.

Seven million people die each year around the world from air pollution, almost all of it caused by burning dangerous fossil fuels and biomass.

In the United States, which (still) operates the most nuclear reactors in the world, nuclear plants are being shut and replaced by dangerous natural gas plants, because gas is "cheap," at least if you don't give a rat's ass about climate change. (Most people don't, really.)

Nuclear plants release about 25 grams of CO2/kwh in order to operate, almost all of this release coming from electrical energy utilized to make the fuel. Dangerous natural gas plants release between 500 and 600 grams of CO2/kwh.
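Using the figures just quoted, the implied gap is easy to quantify:

```python
# Lifecycle CO2 intensity figures quoted above, in g CO2 per kWh.
nuclear = 25
gas_low, gas_high = 500, 600
print(f"gas emits {gas_low / nuclear:.0f}x to {gas_high / nuclear:.0f}x "
      f"more CO2 per kWh than nuclear")  # 20x to 24x
```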

Don't worry. Be happy. Climate change isn't your problem. It's the problem of every generation that comes after us. We. Couldn't. Care. Less.

Since the Fukushima event, the weekly measurements of the year-to-year increase in carbon dioxide have averaged 2.35 ppm.

In the 21st century, these weekly year-to-year increases have averaged 2.12 ppm.

In the 20th century, the same average was 1.54 ppm.

We're doing great. Elon Musk. Tesla Car. Solar City. Rah. Rah. Rah.

Have a pleasant Sunday.

Glowing Wounds at Shiloh, An Interesting Tale From the Westinghouse/Intel Science Awards.

I'm not too much into that concept of "gifted children" and although I wasn't a regular listener to "Prairie Home Companion" I always got a laugh out of Garrison Keillor's continuously repeated little joke about "Lake Woebegon's" children all being "above average."

I've seen a lot of children ruined by being declared "gifted." It happened to me, and though I eventually recovered, a part of my life was wasted.

I have a relative who ruined her son's life by carrying on about how "gifted" he was, and he was in his thirties before he held a job or finished college. The same relative has convinced her daughter to pull her first grader, her granddaughter now, out of public school, because she's "too gifted" to be among "mere first graders."

Some people never learn.

It sucks. It really sucks.

I sent my sons to a public school. I let them decide what classes in which they would feel comfortable. One of them decided not to take "honors classes" in 9th grade and nevertheless was admitted to a damned good university with 30 college credits by the time he finished high school. He never thought he was smarter than anyone else; in fact, he had the good fortune to think he was rather ordinary. The other son was dyslexic, and got treated like shit in school, put in with the "slow kids" but stuck it out, went on to do his thing, and will graduate this spring with a 4.0 from a very good art school.

Neither of my sons were declared, "gifted," by me or my wife, and if anyone outside our family started in with any of that crap, we cut them off.

The "gifted" meme exists nonetheless. And apparently there's a journal devoted to "gifted children..."

I'm going through some old files I downloaded at the library but never actually read, and came upon a directory I called "Shiloh" back in 2016.

I couldn't imagine what caught my eye back then, so I just had to look.

"Gifted" or not, the story herein, from a journal devoted to the "gifted," is compelling; it is about a kid classified as having a "specific learning disability..." who won a prestigious Intel Science Award anyway.


Talent Development in Science: A Unique Tale of One Student’s Journey

The full paper is available at this link, but I'll quote from it anyway. I have the full article in my file, which I apparently stumbled upon somehow back in 2016; how, I can't recall.

From the opening of the introduction:

Science fairs have long been the showcase of gifted students across the United States. The following story describes the path of one student as he developed a project that eventually won the Intel International Science and Engineering Fair (Intel ISEF).

Perhaps the most extraordinary aspect of this case is that this gifted student was atypical in numerous respects in his pursuit to win this prestigious competition. First, he had been identified years earlier with a specific learning disability. He also suffered from bouts of depression and experienced social isolation. Not surprisingly, he was unmotivated. Finally, he did not like school. The typical response to this type of student would include medication, social skill instruction, and remediation. Instead, his parents firmly believed that more was to be gained by accentuating the positives, so they encouraged him to pursue his passions and follow his dreams...

It was his parents who knew what to do...

...To understand the uniqueness of this triumph, we need to explore how Bill was able to accomplish this feat despite his disabilities and school difficulties. A twice-exceptional learner in school, Bill was plagued throughout his school career by mild depression, as well as learning and attention deficits. School was not always an ideal environment for him. Bill was diagnosed as learning disabled in 7th grade when the school system finally acknowledged that there was a 2-year discrepancy between his ability and performance. But, Bill’s problems had surfaced as early as preschool. Poor peer relations, inappropriate social behaviors, and a reluctance to complete written assignments punctuated his early childhood years...

...The pupil personnel team thought Bill was just lazy and recommended remediation. His parents had him tested privately. His scores on the various WISC subtests ranged from the 4th to the 99th percentile. He was diagnosed as depressed, and medication was recommended. His parents objected and instead insisted that the source of the depression be the focus of attention. To this end, Bill transferred to a school with a gifted education program in which he participated and, in addition, received support in organization and learning strategies. Bill regained some success in this setting...

Been there, done that, school officials "recommending medication..."

Um, in our case there is no medication for dyslexia, and the training of school officials is mostly not in medicine.

So it turns out that Bill had an interest in Civil War history. In the 4th grade, he read about how soldiers in the Civil War had sometimes boiled used bandages because they had no replacements - even though the scientific basis of sterilization was unknown. Bill later won a middle school science fair award for a study of Civil War sterilization techniques, which were apparently used because of experience and observation, not any scientific understanding of the pathology of infection or any knowledge of microbiology:

Although both of Bill’s parents had a background in science, Bill did not seem to share their enthusiasm for it. In fact, he needed to be coaxed to achieve in his science classes at all. A notable exception, however, was the middle school science curriculum, which included opportunities for students to conduct original research projects and enter local science fair competitions. Bill’s first entry during middle school tapped his knowledge about an event that had occurred during the Civil War.

Bill had learned that, after one long battle, a battalion had exhausted its supply of bandages. To address this problem the medical corps decided to reuse the soiled bandages by first boiling them. Motivated by hearing this story, Bill generated a project describing the sterilization techniques used in the Civil War. This project won him first place in a competition for his school. Reinforced by this success, Bill began to understand that there are historic connections to scientific discoveries and that his interest in and knowledge of history could serve as an entry point for science investigations. Indeed, the internationally award-winning project was his fourth involving the Civil War

By tenth grade, Bill finally made a close friend:

During his sophomore year, in fact, Bill failed the standard (traditional) biology course, but convinced authorities that he could enroll in an AP course during the summer at a local college. He excelled in this 6-hour-a-day class and received an A for the course. “I hated the way biology was taught in my school. It was mostly listening to a lecture and writing tests and papers,” Bill explained. “In the AP course we had lab every day, and during the lecture we discussed what happened in the lab. I took the AP exam the next spring and scored a 4. I would have gotten a 5, but I was tired when I got to the essay, as it was my second exam of the day. I was amazed how well I did since I did very little review.”

Bill remained unenthusiastic about his school’s science class until he met John, who happened to be in the same chemistry class. Bill said, “John is my polar opposite. That is why we complement each other well. We liked each other right away. He could keep up with my jokes and me. He is quick-witted like I am and also a smart-aleck.” Chemistry was fun with John in the class, and Bill received an A for the course, but had no interest in entering a competition that year…

Bill read a report about glowing wounds at Shiloh, the first Civil War battle to result in massive casualties. His friend John, unlike Bill, had a strong interest in entering what was then known as the "Westinghouse Science Award" competition for high school kids.

According to oral history, injured soldiers were observed to have glowing wounds. It is important to remember that, in the 1860s, sanitation and sterile surgery techniques were not well known or practiced. Many soldiers at that time survived their wounds initially, only to die of secondary staph infections or face amputation due to gangrene infection. According to the story, those soldiers who exhibited glowing wounds survived their wounds more often than the casualties whose wounds did not glow. When Bill heard this tale, he passed it on to John, and the two of them then explored the possibilities of investigating this intriguing phenomenon.

The kids did some research, and they speculated that the glowing bandages may have involved luminescent bacteria.

The boys began to investigate the feasibility of discovering whether this type of bacterium could be the source of the glowing wounds of the Shiloh story. Preliminary research revealed important information about the conditions existing at Shiloh that could explain the presence of these bacteria. The Battle of Shiloh was fought on a flood plain during a cool, wet spring—perfect conditions for nematodes and P. luminescens, which the nematodes carry. The soldiers were constantly struggling in the mud, and, in many cases, the wounded were left in the cool dampness of the mud for several hours. These wounded soldiers quickly developed hypothermia, which, again, would provide the perfect environment for growth of these bacteria. The P. luminescens does not grow well at body temperature, but if body temperature drops a few degrees, as in the case of hypothermia, the bacterium reproduces rapidly.

Long story short: Working with tools from Bill's mother's lab, the boys prepared a lot of bacterial cultures with various media to simulate wounds, proved their hypothesis about the luminescent bacteria (which are, by the way, "good bacteria" inasmuch as they produce antibiotics), and entered the contest:

The judges of the two major competitions—Siemens Westinghouse and Intel ISEF—each viewed with much admiration Bill and John’s PowerPoint presentation and display of their study. Moreover, these young scientists impressed the judges sufficiently to come away with first place in the Intel ISEF Competition in 2001 and second place in the Siemens Westinghouse Competition.

Winning these awards encouraged the boys to continue their research. They would like to test the soil at Shiloh to further confirm their hypotheses. Furthermore, they are interested in learning more about the healing potential of P. luminescens bacteria. Given the persistence that has characterized their work thus far, they will very likely make the time and find the resources to continue their unique collaboration.

Cool story. I probably picked the paper because I am personally interested in the history of the American Civil War, a war caused in part by, or at least triggered by, the incompetence of a President, James Buchanan, who was generally regarded by historians as the worst President in US history, at least until Trump came along.

Have a great weekend.

Some Interesting Details of How the Hubble Space Telescope Was Used to Discover Neptune's 7th Moon.

The paper I'll discuss in this brief post is this one: The seventh inner moon of Neptune (M. R. Showalter, I. de Pater, J. J. Lissauer & R. S. French, Nature 566, pages 350–353 (2019))

One of the greatest inventions of my overly long lifetime has been the CCD (charge coupled device) camera that has made it possible to convert light into digital data.

My own most immediate experience of this kind of device concerns immunogenicity testing, wherein a person's immune response to a protein therapeutic drug can either inactivate the drug or, even worse, cause a strong anaphylactic - at times life threatening - shock. In these cases the body's immune system generates antibodies to the drug, "anti-drug antibodies," and to detect these, one (proprietary) technology uses some nano/micro technology to affix copies of the drug onto a carbon surface, followed by treatment with the patient's blood, serum or plasma. The biological fluid is washed off, and the antibodies, developed to attach to the drug itself, remain attached. Antibodies have a "Y" shape and attach to antigens at either arm of the "Y," meaning that if they attach to the bound drug, one arm (generally) remains free. In the next step, the drug, this time attached to a chemical species complexing a ruthenium atom, binds to the other arm of the antibody, whereupon the application of an electrical charge causes the ruthenium to emit a tiny amount of light, probably not visible to the naked eye, but available to be detected by a CCD camera. This is a key technology in saving human lives.

One may ask what this has to do with the moons of Neptune, but in a remote sense it does have a connection: the Hubble Telescope is chock full of CCD cameras, and it was these that captured the speck of light reflected off of Neptune's seventh moon, Hippocamp.

What is cool is that the detection of Neptune's seventh moon depended not just on optical detection, but on the processing of the digital data, by which the tiny optical signal was extracted from the "noise" light of the universe.

From the introductory text:

We have devoted three Hubble Space Telescope (HST) observing programmes to studies of the rings, ring arcs and small inner moons of Neptune. We used the High Resolution Channel (HRC) of the Advanced Camera for Surveys (ACS) in 2004–2005 and the Ultraviolet/Visual Imager (UVIS) of Wide Field Channel 3 (WFC3) in 2009 and 2016. Hippocamp, also designated4 as S/2004 N 1 and Neptune XIV, was discovered during a reanalysis of the first two datasets (Fig. 1a–c) and confirmed in the third (Fig. 1d).

Here is the figure to which the text refers:

Here is the caption:

a, View from Visit 04 of programme GO-10398, showing the earliest detection of Hippocamp, on 2004 December 9. Neptune is behind the HRC occulting mask. b, Visit 08 of programme GO-10398, on 2005 May 12. c, View from the first orbit during Visit 01 of programme GO-11656, on 2009 August 19. The grey vertical band is due to Neptune’s saturation bloom, in which the heavily saturated pixels of the charge-coupled device tend to saturate adjacent pixels above and below. d, Visit 03 of programme GO-14217, on 2016 September 2. Panels a and b have been rotated 90° anticlockwise. In each panel a small square locates Hippocamp, and a close-up is shown in the inset. Other moons and the outline of Neptune are indicated.

Some more text:

The long delay between the first image acquisition and the discovery of Hippocamp arose because of the specialized image processing techniques required. To detect a small moon in an image, motion smear should be limited to the scale of the point-spread function. For Neptune’s inner system, this limits exposure times to 200–300 s before smear dominates and the signal-to-noise ratio (SNR) ceases to grow. We have developed an image processing technique to push integration times well beyond this limit. Although the moons of Neptune move rapidly across the detector, that motion is predictable and can be described by a distortion model. Our procedure involves deriving a pair of functions r(x) and θ (x) that return the orbital radius and inertial longitude, respectively, as a function of the two-dimensional (2D) pixel coordinate x. The inverse function x (r, θ ) can also be readily defined. We derive the mean motion function n (r) from Neptune’s gravity field, including its higher moments5 J2 and J4. One can use these functions to transform an image taken at time t0 to match the appearance of another image obtained at time t1 by relocating each pixel x 0 in the original image to a new location x1:

x1 = x(r(x0), θ(x0) + n(r(x0))(t1 − t0))

(I had to suppress the conversion of equation notation into smileys here, and the editor here no longer allows subscripts for some reason, following the election day attack on DU in the election in which Putin's orange nightmare puppet was installed in our White House. Imagine where the subscripts go.)
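The transform in the quoted text can be sketched in code. Nearly everything below is an assumption for illustration: an idealized face-on polar camera model with a made-up image center and plate scale, and a plain Keplerian mean motion, whereas the paper's model includes the camera distortion and Neptune's J2 and J4 gravity moments.

```python
import numpy as np

# Sketch of the pixel-relocation transform quoted above. The camera
# model here is invented and idealized: a face-on polar mapping about
# an assumed image center with an assumed plate scale, and a plain
# Keplerian mean motion (no J2/J4, no camera distortion).
CENTER = np.array([512.0, 512.0])  # assumed image center [pixels]
KM_PER_PIXEL = 500.0               # assumed plate scale
GM_NEPTUNE = 6.8365e6              # km^3/s^2

def r_of_x(x):
    """Orbital radius [km] of pixel coordinate x."""
    return np.hypot(*(x - CENTER)) * KM_PER_PIXEL

def theta_of_x(x):
    """Inertial longitude [rad] of pixel coordinate x."""
    dx, dy = x - CENTER
    return np.arctan2(dy, dx)

def x_of_rtheta(r, theta):
    """Inverse map: (radius, longitude) back to a pixel coordinate."""
    p = r / KM_PER_PIXEL
    return CENTER + p * np.array([np.cos(theta), np.sin(theta)])

def mean_motion(r):
    """Keplerian mean motion n(r) [rad/s] around Neptune."""
    return np.sqrt(GM_NEPTUNE / r**3)

def relocate(x0, t0, t1):
    """x1 = x(r(x0), theta(x0) + n(r(x0)) * (t1 - t0))."""
    r = r_of_x(x0)
    theta = theta_of_x(x0) + mean_motion(r) * (t1 - t0)
    return x_of_rtheta(r, theta)
```

A sanity check on the sketch: relocating a pixel forward by one full orbital period at its radius returns it to (numerically) the same place, which is what allows exposures taken at different times to be co-added in a moon's co-moving frame.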

The next figure shows the image processing steps:

The caption:

a, Image ib2e02ziq_flt, the first in a sequence of eight long-exposure images from the second HST orbit of Visit 02 in programme GO-11656 (2009 August 19). b, Image ib2e02zmq_flt, taken 21 min later. Despina, Galatea and Larissa have shifted noticeably in position. c, Image from a, transformed to match the geometry of the image in b. d, The result of co-adding all eight images, revealing Hippocamp and Thalassa. The outline of Neptune’s disk, as distorted by the camera, is shown in each panel.

A little more cool stuff on the imaging procedure:

Although we were able to control Neptune’s saturation using the methods described above, glare from Neptune was ever-present and, as with all long exposures on HST, cosmic rays created a smattering of ‘snow’ atop most images (Extended Data Fig. 5a). Hot pixels fall at known locations in each image and are catalogued for each detector. Cosmic-ray hits were recognized as clusters of pixels in one image that differ by more than three standard deviations from the median of identical exposures from the same HST orbit. For cosmetic purposes, we overwrote these pixels with the median of the adjacent pixels (Extended Data Fig. 5b). However, we also kept track of overwritten pixels using a boolean mask and ensured that masked pixels were ignored in the subsequent data analysis (Extended Data Fig. 5c). We suppressed the glare and diffraction spikes by aligning the centre of Neptune in all the images from each HST visit that shared a common filter. We constructed a background image from the median value of all the pixels after aligning on the centre of Neptune. Unlike the mean, the median is not affected by moons (which move rapidly) or cosmic-ray hits (which are transient). The resulting images were therefore a smooth representation of Neptune’s glare and diffraction spikes. Subtracting the backgrounds yielded individual images that were almost free of distracting gradients (Extended Data Fig. 5d).
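The cosmic-ray step in the quoted passage can be sketched as follows. One liberty taken: a robust MAD-based scatter estimate stands in for the per-pixel standard deviation the authors describe, since a plain standard deviation would itself be badly skewed by the very hits being flagged.

```python
import numpy as np

# Sketch of the cosmic-ray rejection described above: flag pixels in a
# stack of co-registered identical exposures that deviate strongly from
# the per-pixel median, overwrite them with the median (cosmetic), and
# keep a boolean mask so later analysis can ignore them. A MAD-based
# scatter estimate stands in for the standard deviation here.
def clean_stack(stack, nsigma=3.0):
    """stack: (n_images, height, width) array of identical exposures."""
    med = np.median(stack, axis=0)                           # per-pixel median
    sigma = 1.4826 * np.median(np.abs(stack - med), axis=0)  # robust scatter
    mask = np.abs(stack - med) > nsigma * sigma              # cosmic-ray hits
    cleaned = np.where(mask, med, stack)                     # overwrite hits
    return cleaned, mask
```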

This process involves mathematical modeling of the orbital parameters, and the authors check the viability of this procedure using the previously discovered moons of Neptune, with more or less satisfactory results:

All orbits are in good agreement for Despina, Galatea, Larissa and Proteus. Naiad’s orbit agrees with the Voyager-era solution7 if one increases its mean motion by 1σ; the 2004 solution6 disagrees with this work because it includes an erroneous measurement. We also note that the orbit solutions for Thalassa appear to be diverging, although all solutions agree at the Voyager epoch.

The authors comment on the possibility of other moons:

The Voyager images established an upper limit of about 5 km on the radius of any undiscovered moons1 (assuming k = 0.09). That search was complete inside r = 65,000 km and partially complete inside 90,000 km. Between the limits of the Voyager search and the orbit of Proteus, we can now rule out any moons that are half as bright as Hippocamp, which corresponds to R ≈ 12 km. Beyond Proteus, our images are freer from Neptune’s glare and orbital motion is slower, making it possible to co-add larger sets of images (Extended Data Fig. 3).

The full paper contains some interesting commentary on the history of Neptune and its moons.

From the extended data, a graphic showing the "recovery" of Naiad, the first image of that moon since the Voyager flyby:

a, b, Portions of an HST image after processing and co-adding as described in the text. The location of Naiad in each panel is indicated by a small square; close-ups are shown in the upper-right insets. The outline of Neptune’s disk is indicated by a blue ellipse. a, View from Visit 01, orbit 1 of HST programme GO-11656, obtained on 2009 August 19. The image shows the first unambiguous detection of Naiad since the 1989 Voyager flyby of Neptune. b, View from Visit 08, orbit 2 of programme GO-14217, taken on 2016 September 2.

Wonderful science I think.

It is, albeit, an artifact of another age, a time when our country could do things like build the Hubble Space Telescope, before a fool set out to destroy this country aided by sick little racists and traitors.

It is strange to be surrounded by so much ignorance at the precise time when humanity has extended its vision to the most incredibly small dimensions and the most incredibly vast dimensions, literally, quite literally across the universe.

How odd it is that this is so - that we can see so far out and so far in and so much in between - still, even in a country ruled by a lump of mindless orange lipids with its greasy, greedy, plasticine eyes fixated on its own ugly porcine navel, oblivious of the ugliness of its little nerves and oozing petty bigotry.

Enough of pain, enjoy the new moon.

Have a pleasant weekend.

This "percent talk" is obscene.

It is obscene because energy demand worldwide is rising, and the fastest rising source of energy on this planet is not wind, nor solar.

It's natural gas, which grew in 2017 - the last year for which we have comprehensive data - by four times as much as wind, solar, geothermal and tidal combined.

Worldwide, the solar, wind, geothermal and tidal industry grew at one seventh the rate of growth of worldwide energy demand in 2017.

Combined, these trash technologies - wind, solar, geothermal and tidal - didn't grow as fast as petroleum.

In this century, world energy demand grew by 164.83 exajoules to 584.95 exajoules.

In this century, world gas demand grew by 43.38 exajoules to 130.08 exajoules.

In this century, the use of petroleum grew by 32.03 exajoules to 185.68 exajoules.

In this century, the use of coal grew by 60.25 exajoules to 157.01 exajoules.

The solar, wind, geothermal, and tidal energy on which people so cheerfully have bet the entire planetary atmosphere, stealing the future from all future generations, grew by 8.12 exajoules to 10.63 exajoules.

10.63 exajoules is under 2% of the world energy demand.
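The arithmetic behind that figure, and the growth comparison above, using the exajoule numbers quoted in this post:

```python
# Growth this century and 2017 totals, in exajoules, as quoted above.
growth_ej = {"gas": 43.38, "petroleum": 32.03, "coal": 60.25,
             "wind/solar/geothermal/tidal": 8.12}
renewables_total_ej = 10.63
world_demand_ej = 584.95

share = 100 * renewables_total_ej / world_demand_ej
print(f"renewables share of world demand: {share:.1f}%")  # 1.8%

fossil_growth = growth_ej["gas"] + growth_ej["petroleum"] + growth_ej["coal"]
ratio = fossil_growth / growth_ej["wind/solar/geothermal/tidal"]
print(f"fossil growth / renewables growth: {ratio:.1f}x")  # 16.7x
```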

2018 Edition of the World Energy Outlook Table 1.1 Page 38 (I have converted MTOE in the original table to the SI unit exajoules in this text.)

We're at 412 ppm of carbon dioxide. Do we give a shit? Do we care?

I really question when people are going to abandon this obscene percent talk and wake up.

Before being subject to all kinds of unjustified selective attention with respect to risks, the nuclear industry grew to 28.8 exajoules in less than 20 years, led by the United States, which built more than 100 reactors while producing the lowest priced electricity in the world.

It is, what it has always been: a gift by the finest minds of the 20th century to an increasingly ignorant generation that somehow has convinced itself that only nuclear energy need be perfect, while other forms of energy can suck money and human lives without restriction.

The fact is that even if wind energy were clean - it's not, because steel, aluminum, plastics, carbon fibers, and, most environmentally questionable of all, lanthanides are carbon intensive materials - it would still be incapable of meeting the increases in worldwide energy demand, not the totals, just the increases.

Concrete, a giant feature of this offshore tragedy in Britain and elsewhere, is also a huge contributor to climate change.

I have analyzed, in this space, the lifetime of wind turbines in that offshore oil and gas drilling hellhole, Denmark. It's about 18 years on average. In less than 20 years many of the world's wind turbines will need replacement, and the garbage the old ones have become will need to be hauled away.

After the combustion of dangerous fossil fuels for cars, heating, and power generation, the two material costs, steel and concrete, are the largest contributors to climate change: steel accounts for well over a billion tons of the rising 35 billion tons of CO2 we dump on future generations each year, concrete for another billion or so.

Thus the low energy to mass ratio connected with the wind industry means it's a rather dirty industry, even if one chooses to ignore - as everyone does - its baleful impact on the avian biosphere.

Given that, after decade upon decade and tons and tons of "percent talk" about wind and solar, things are getting worse, not better, we really should rethink our dogma.

Reality may suck, but it is reality.
Go to Page: 1 2 Next »