

Profile Information

Gender: Male
Current location: New Jersey
Member since: 2002
Number of posts: 28,208

Journal Archives

Polymers with controlled assembly and rigidity made with click-functional peptide bundles

The paper I'll discuss in this post is this one: Polymers with controlled assembly and rigidity made with click-functional peptide bundles, (Pochan et al, Nature 574, 658–662 (2019)).

I had a friend and colleague once who left his job because his company was telling him to make peptides on an industrial scale (ton quantities). He told me, offending me slightly, that he wanted to do "something other than dehydration reactions," that is, something other than removing water to make chemical bonds.

Those of us who are environmentalists complain, quite justifiably I think, about polymers, because single-use plastics (and to a lesser extent multiple-use plastics) are fouling the seas, land, and living systems at an increasing rate. Nevertheless, in a very real sense, you are a polymer, or better put, a collection of polymers, since almost all of the molecules of which you are made are polymers.

There was a point in my career when I was a peptide chemist, and trust me, the chemistry of peptides (and their synthesis) is considerably more complex than simply removing water, with all due respect to my friend's outstanding knowledge of our science. I will tell you that I spent a year of my life dealing with the huge difference between the behavior of aspartic acid and glutamic acid, two natural amino acids that have exactly the same functional groups and differ only in that glutamic acid is one methylene group longer than aspartic acid.

Anyway, this paper caught my eye, both as a one-time peptide chemist and as a person very interested in the sequestration of carbon in useful and environmentally more benign polymers.

From the abstract of the paper:

The engineering of biological molecules is a key concept in the design of highly functional, sophisticated soft materials. Biomolecules exhibit a wide range of functions and structures, including chemical recognition (of enzyme substrates or adhesive ligands1, for instance), exquisite nanostructures (composed of peptides2, proteins3 or nucleic acids4), and unusual mechanical properties (such as silk-like strength3, stiffness5, viscoelasticity6 and resiliency7). Here we combine the computational design of physical (noncovalent) interactions with pathway-dependent, hierarchical ‘click’ covalent assembly to produce hybrid synthetic peptide-based polymers. The nanometre-scale monomeric units of these polymers are homotetrameric, α-helical bundles of low-molecular-weight peptides. These bundled monomers, or ‘bundlemers’, can be designed to provide complete control of the stability, size and spatial display of chemical functionalities. The protein-like structure of the bundle allows precise positioning of covalent linkages between the ends of distinct bundlemers, resulting in polymers with interesting and controllable physical characteristics, such as rigid rods, semiflexible or kinked chains, and thermally responsive hydrogel networks. Chain stiffness can be controlled by varying only the linkage. Furthermore, by controlling the amino acid sequence along the bundlemer periphery, we use specific amino acid side chains, including non-natural ‘click’ chemistry functionalities, to conjugate moieties into a desired pattern, enabling the creation of a wide variety of hybrid nanomaterials.

"Click Chemistry" is chemistry, generally organic chemistry, that involves chemical reactions that take place very fast and in quantitative or nearly quantitative yields under easily accessible conditions. "Click" reactions represent only a small subset of known chemical reactions, but they are very important.

From the paper's introduction:

Our bundlemer-based polymer chains exhibit a variety of unique features. Unlike high-molecular-weight synthetic polymers, our chains use small (roughly 3 kDa), easily synthesized peptide sequences that fold into designed tetrameric 4-nanometre bundles. The subsequent covalent assembly of these bundles yields polymers with micrometre-scale contour lengths. The design of α-helical homo-oligomers has a long history, with both empirical de novo8 and computational9 methods being used. Here, computationally designed homotetrameric bundles with D2 symmetry10 present two reactive groups at each end, owing to chemical functionalization of the amino termini of the constituent peptides (Fig. 1a). Distinct homotetrameric bundles with complementary reactive functional groups are chemically linked (or ‘clicked’ together) to produce bundlemer chains.

A graphic with some images of these polymers along with a cartoon describing an example of the "click chemistry" utilized in assembling these polymers:

The caption:

a, Left, peptides 1 and 2 (Extended Data Fig. 1), shown in single-letter amino acid code, have at their N termini (blue) either maleimide (Mal) or cysteine (C). The carboxyl terminus (red) of each peptide is unreactive. Each sequence forms homotetrameric bundlemers: grey, peptide 1; white, peptide 2. Centre, the thiol–maleimide click reaction yields chains with two covalent linkages between neighbouring bundlemers. b, TEM of rigid rods produced with a 1/1 ratio of peptides 1 and 2. The sample is negatively stained with phosphotungstic acid (PTA). c, CryoTEM of rigid rods longer than 1 μm in aqueous solution. d, Negatively stained TEM of short rigid-rod chains produced using an asymmetric ratio (10/9: [peptide 1]/[peptide 2]) of reacting bundlemers. e, The organic tetrathiol PETMP (black wavy lines) links peptide-1 bundlemers to form semiflexible chains. f, Examples of segmented chains produced by connecting short rigid rods with PETMP. Rod segments within the segmented polymers range in length from approximately 50 nm (where n, the number of bundlemers per segment, is approximately 3 to 4) to 100 nm (where n is approximately 8 to 9).

In this case the "click chemistry" involves the side chain of a very important amino acid, cysteine, which features a sulfhydryl (thiol) side chain. (This amino acid is often involved in the complexation of metals by metalloproteins and is frequently a critical feature of their catalytic sites. The affinity of mercury and cadmium for these thiols, in lieu of the zinc with which the proteins are supposed to function, is a key factor in the toxicology of these two metals.)

The behavior of some of these polymers as liquid crystals:

The caption:

Rods were prepared as in Fig. 1a, d with alternating bundlemers of peptide 1 and peptide 2. a, Polarized optical microscopy (POM) of a pseudoisotropic region of roughly 100-nm-long rods with multiple TFCDs, indicative of a lyotropic lamellar phase. POM was performed on an 8% (w/v) short rigid-rod solution at pH 2. b, TEM of negatively stained short rigid rods from the pseudoisotropic region shown in a, revealing clear rod layering. c, TEM reveals the structure of dilute regions in which rigid rods have locally aggregated into droplets with clear rod orientation. d, Bottom, diagram of a TFCD cross-section formed in smectic-A-type liquid crystals. Top, enlargement of a single smectic layer, showing the proposed homeotropic alignment of individual rigid rods. The blue dashed lines represent boundaries between smectic layers confined between parallel walls (thick black lines represent the glass slide and cover slip in the POM). The liquid-crystal director n, the axis along which all rods are aligned within individual layers, is perpendicular to the smectic layers. The local orientation director (grey arrows) within the smectic A layers is parallel to n far from the TFCD. In the vicinity of a topological defect on the glass substrate (yellow), the local orientation field folds towards the defect.

Some interesting reversible behavior of some of these polymers:

The caption:

a, Rigid rods were created using fluorescently labelled variants (peptide 3 (right) and peptide 4 (left); Extended Data Fig. 1), each containing either 4-chloro-7-nitrobenzofurazan (green) or 5(6)-carboxy-tetramethylrhodamine (red) attached to the lysine-24 side chain. Bundlemers of peptide 2 (centre, white) were used to form short rigid rods comprising a single dye type. The resulting red and green rods were joined with peptide-1 bundlemers (grey) to make longer rods with red and green segments. The STORM images below are of resulting longer rigid rods. The constituent red or green fluorescence of each segment is easily resolved. b, Rigid rods from a are heated to 90 °C, resulting in unfolding and dissociation of the individual bundles while peptide dimers remain covalently linked. When the solution is then cooled to 20 °C, the bundlemers and rigid rods reform. c, Reassembled rods now display co-localization of green and red fluorescence (producing a yellow signal when the green and red channels are displayed concurrently) along the entire reformed rod lengths.

Some other interesting properties suggesting hybrid material options:

The caption:

a, AFM image of rigid rods formed using peptides 2 and 6 (Extended Data Fig. 1), with azide-functionalized PEG2000 chains conjugated to the rigid rods. b, AFM image of the rigid-rod area within the white outline in a; the area in the green rectangle was used for height analysis along the rod longitudinal axis (d). c, Diagram illustrating bundles of peptide 6 (grey) and peptide 2 (white) conjugated with PEG2000. d, Height trace along the longitudinal axis in b. e, Left, maleimide-functionalized gold nanoparticles are conjugated with peptide 7 (Extended Data Fig. 1), and then allowed to assemble into hybrid nanoparticle–bundlemer chains (right). f, TEM of nanoparticle–bundlemer chains. g, Magnified TEM images of the indicated nanoparticle–bundlemer chains in f reveal interparticle separation consistent with the dimensions of peptide bundles.

Here the coordinated nanoparticles are gold, but in principle pretty much all of the metallic portion of the periodic table might prove accessible to such controlled polymers, offering the possibility of separations with very high distribution coefficients, useful for recovering dilute materials and also for remediating polluted sites.

Have a nice day tomorrow.

Faradaic electro-swing reactive adsorption for CO2 capture.

The paper I'll discuss in this post is this one: Faradaic electro-swing reactive adsorption for CO2 capture (Sahag Voskian and T. Alan Hatton, Energy Environ. Sci., 2019, Advance Article, accessed 10/30/19).

I came across reference to this paper, out of MIT, in the scientific popular press, actually in several places, and since the quality of journalism describing science is often quite bad, decided to access the original paper.

The paper is, happily, open access, and anyone can read it. I'll excerpt it and offer the graphics in any case.

I've spent a lot of time reading about separations of carbon dioxide from various matrices, and I will say that this one is unusual: an electrochemical approach. Since the separation of carbon dioxide, a low energy gas, from dilute matrices requires overcoming entropy, this process is not energy neutral by any means; it costs energy, but it may be more efficient than the alternatives. It does not seem operative at air concentrations of CO2, requiring concentrations of 0.6% as compared to 0.041% in air (as of this writing).
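The entropic cost mentioned here can be put on a rough quantitative footing. A minimal sketch of the textbook ideal minimum work of separation, W = RT ln(1/y) per mole of CO2 withdrawn from a large stream with CO2 mole fraction y (an idealization of my own for illustration; it ignores the residual stream's composition change and all real-device inefficiency):

```python
import math

R = 8.314    # J/(mol*K), gas constant
T = 298.15   # K, ambient temperature

def min_separation_work(y_co2):
    """Ideal minimum work (kJ per mole of CO2) to withdraw pure CO2
    from a large gas stream with CO2 mole fraction y_co2.
    Ignores the composition change of the residual stream."""
    return R * T * math.log(1.0 / y_co2) / 1000.0

# The 0.6% stream the device requires vs. ambient air at ~0.041%
print(f"0.6% stream:  {min_separation_work(0.006):.1f} kJ/mol CO2")
print(f"0.041% (air): {min_separation_work(0.00041):.1f} kJ/mol CO2")
```

Note that the thermodynamic floor rises only logarithmically with dilution; the practical difficulty at air concentrations comes mainly from mass transfer, as the paper's own introduction observes.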

The system uses a rather well known organic redox couple, the quinone/hydroquinone system (here in the form of an anthraquinone polymer), in a very creative way.

From the introductory text:

With the alarming increase in the atmospheric concentration of carbon dioxide (CO2) and its implications for global climate pattern developments,1,2 mitigation of climate change through curtailment of anthropogenic CO2 emissions has been one of the most urgent socioeconomic and scientific problems in the global arena over the last decade.3 To this end, a number of technologies have been developed for the large-scale capture of CO2 from combustion and other industrial processes to produce high-purity CO2 streams for storage or valorization.4 The most mature of these technologies are solvent scrubbing, mainly amine scrubbing,5 and oxyfuel combustion,6 which target high CO2 concentration streams (>10%). These approaches have a large footprint and, when retrofitted to a process, can require major modifications to the plant. Consequently, there has been a major effort to develop new materials and processes for high efficiency CO2-capture, including sorbents for pressure and temperature swing adsorption systems,7 and membranes for selective transport of the CO2.8 Furthermore, many potential applications of carbon capture require compact devices due to space limitations, such as in the direct capture of CO2 from tailpipe exhausts on board mobile sources, in which there is growing interest given the large contribution of transportation exhaust to greenhouse gas emissions (33.5% of U.S. CO2 emissions in 2016).9

In addition to the capture of CO2 from direct combustion processes, there is a need to remove CO2 from enclosed spaces for ventilation purposes in buildings and car cabins, or for cabin environmental control systems on board spacecraft and submarines, where the maximum allowed CO2 concentration in habitable spaces is 5000 ppm (or 0.5%).10 The first of such systems was developed by Winnick et al., for the electrochemical capture of CO2 in spacecraft cabins using molten carbonates.11 However, the low concentration of CO2 in such applications poses a challenge, mainly due to the low driving forces for mass transfer and the large quantities of other species present in air in addition to CO2.12 Thus, carbon capture is a multi-scale problem, where the CO2-rich streams to be treated vary greatly in volume, concentration and composition, and different criteria need to be fulfilled to ensure optimal processing depending on whether sources are industrial or small-scale (e.g., power plants or oil and gas heaters), concentrated or dilute (exhausts from combustion or air in confined spaces), and clean or contaminated with other pollutants.

Many of the CO2-capture chemical processes that involve a capture agent such as amines or solid sorbents require temperature and/or pressure swings to release the captured CO2 and regenerate the agents for further capture. These swings result in inefficiencies due to energy wasted in heating solvents and sorbents, pressurizing feed gas, or drawing a vacuum for desorption. Electrochemical systems can minimize such parasitic energy losses as they can be operated at near isothermal conditions, with significantly higher efficiencies than their thermal-swing (TSA) and pressure-swing (PSA) adsorption counterparts.13 One mode of electrochemical capture of CO2 is through the use of a redox-active carrier.

Electrochemically mediated selective transport of chemical species was first reported by Ward et al.,14 where a redox-active carrier (ferrous ion) was used to transport nitric oxide across a membrane. Since then, a number of systems have been developed for transporting chemical species by redox-active carriers that are activated at one electrode, to bind with the target species, and deactivated at the opposite electrode, to release the target and regenerate the carrier.15,16 Systems that have been proposed for the concentration of CO2 through this approach have been based on a number of different carrier molecules, such as quinones,17–20 4,4′-bipyridine,21 and thiolates.22,23 Quinones are of particular interest to this work for their superior electrochemical performance, serving as redox-active carriers for CO2 in electrochemically mediated separation processes. DuBois et al. demonstrated this possibility, and studied the thermodynamics of an electrochemical CO2 pumping system that utilizes quinones.18 More work followed, where Scovazzo et al. demonstrated the electrochemical separation of CO2 from <1% concentration gas mixtures using 2,6-di-tert-butyl-1,4-benzoquinone as a carrier in ionic liquid (IL) and organic solvent electrolytes media,19 while Gurkan et al. screened a number of ILs to serve as suitable electrolytes for quinone carriers in an electrochemically mediated selective transport system for CO2.20 All of these systems, however, require the transport of the electrolyte and the dissolved carrier molecules between two electrodes in an electrochemical cell for capture and release of CO2. This limits their implementation in a number of applications where the requirement for flow systems and pumping, and the large footprint, are problematic.

The quinones are oxidized and reduced by a ferrocene polymer supported on a porous carbon nanotube (CNT) matrix.

This nice graphic cartoon shows the system's operation:

The caption:

Fig. 1 Schematic of a single electro-swing adsorption electrochemical cell with porous electrodes and electrolyte separators. The outer electrodes, coated with poly-1,4-anthraquinone composite, can capture CO2 on application of a reducing potential via carboxylation of quinone, and release the CO2 on reversal of the polarity. The inner polyvinylferrocene-containing electrode serves as an electron source and sink for the quinone reduction and oxidation, respectively.

This system is designed to treat flue gases, but may be adapted to other types of systems. Recently I've been thinking quite a bit about carbon dioxide as a working fluid for Brayton type devices, and in particular have been focusing attention on a cycle with which I was not familiar until recently, the Allam cycle.

It is a closed cycle, where the combustion gas is also the working fluid.

The Allam cycle is designed primarily for use with dangerous natural gas, but I would imagine that it could also be adapted to other systems, notably those derived from biomass.

During the Allam cycle, portions of the carbon dioxide working fluid are removed from the system, commonly described as being for the purpose of "storage." Coupled with nuclear primary energy, however, these portions could be utilized for the purpose of making materials, for example carbon nanotubes impregnated with, um, ferrocene polymers, and millions of other similar products.

Anyway, from the paper, some SEM images of the system:

Fig. 2 (a) SEM micrograph of the cathode non-woven carbon mat coated with P14AQ–CNT, with details of coated and uncoated areas. (b, c and f) SEM micrographs of increasing magnification of carbon fibers coated with P14AQ–CNT. (d) SEM micrograph of the uncoated carbon fibers. (e) TEM of PAQ–CNT showing the amorphous polyanthraquinone decorating the MWCNT, a result of the π-π interaction.

The "π-π" here refers to the interaction between the aromatic rings of the anthraquinones and those of the carbon nanotubes.

Cyclic voltammograms of the reduction system:

Fig. 3 Superimposed CVs of PVFc–CNT ( ) and P14AQ–CNT ( ) under N2 and ( ) under CO2 in [Bmim][TF2N], at 20 mV s−1, vs. Fc, at T ∼ 21 °C. The two potential windows are shown; ΔV1 under CO2 and ΔV2 under N2.

(It is worth noting that more and more electrochemical reduction systems for carbon dioxide are becoming known.)

More SEM images:

Fig. 4 (a) SEM micrograph of the anode non-woven carbon mat coated with PVFc–CNT with details of coated and uncoated areas. (b and c) SEM micrographs of carbon fibers coated with PVFc–CNT, the squares indicates the region which is magnified in the next micrograph. (d) SEM micrograph of the magnified polymer-coated CNTs from a different area on the electrode.

Nice photographs of the experimental apparatus:

Fig. 5 (a) Custom-made sealed chamber for closed system experiments with pressure transducer to monitor the changes in pressure as CO2 is adsorbed and desorbed upon cycling of the cell potential. The internal of the sealed chamber (b) with and (c) without the insulating cup. (d) Layers of the electrochemical cell assembled in the sealed chamber.

They obviously have a nice machine shop at MIT.

The system shows nice stability over a large number of cycles:

Fig. 6 (a) Changes in the number of moles of CO2 captured upon charging and discharge of the electrochemical cell over 10 cycles, normalized by the moles of quinone on the electrode ( ). The CO2 captured from and released to the chamber tracks the charge applied to the electrochemical cell, normalized by the area of the cell ( ). (b) The CO2 captured under different feed concentrations. (c) Capacity of cell over 7000 cycles. In a different set of experiments using a larger cell and cavity, (d) shows the effect of varying charging potential for a 1000 s capture and (e) shows the effect of varying the capture duration at −1.8 V capture potential. These experiments were conducted at T ∼ 21 °C.

A cartoon of the configuration of the system:

Fig. 7 (a) Schematic illustration of the parallel passage electrochemical cell contactor. The blue region indicates the saturated zone and the development of the mass transfer zone. (b) Photograph of a flow bed with a stack of the electrochemical cells.

Breakthrough at various concentrations:

Fig. 8 (a) Breakthrough profiles obtained at four inlet concentrations. (b) Same breakthrough profiles in (a) normalized by the inlet concentrations. (c) Breakthrough profile obtained from a large system operating at ∼10% inlet concentration. (d) Breakthrough profiles obtained from five replicate runs of a smaller system operating at ∼0.8% inlet concentration. These experiments were conducted at T ∼ 21 °C.

A chemical schematic of the process:

Scheme 1 (a) Two single-electron reduction waves of anthraquinone in the absence of electrophiles. (b) One two-electron reduction wave of anthraquinone in the presence of CO2.

The electrochemical reaction scheme:

Scheme 2 Reaction steps of the double carboxylation of quinones (a) in high and (b) low CO2 fluxes towards the anthraquinone electrode. E represents an electrochemical reaction step. C represents a chemical reaction step.

A cartoon of the electrochemical cell configuration:

Fig. 9 Cross-section of the electrochemical cell used in the simulations.

A graphic of charge and discharge of the system:

Fig. 10 Simulation of charging the electrochemical cell at different CO2 concentrations at constant current. (a) Potential difference of the cell. The change in concentration of quinone with charge is shown at (b) 0%, (c) 2% and (d) 5% CO2.

More on breakthrough (the physical saturation of the system):

Fig. 11 (a) Breakthrough profiles from simulation at 50% CO2 and charging potential of 1.7 V; inset: the concentration of CO2 in the channel and the concentration of unbound quinonic species in the cathode with bed volume. (b) Breakthrough profiles at multiple capture potentials at 50% inlet concentration. (c) Breakthrough profiles from simulation at multiple inlet concentrations at a charging potential of 1.3 V. (d) Normalized breakthrough profiles of the capture experiments in (c). (e) The current across the electrochemical cells during the capture experiments in (b). (f) The current across the electrochemical cells during the capture experiments in (c).

More electrochemical schematics:

Scheme 3 The reduction of anthraquinone at a potential higher than its second reduction potential with a limited flux of CO2.

An important graphic showing the energy penalties associated with carbon capture using this device:

Fig. 12 Fraction of CO2 released from the PAQ–CNT electrode with release voltage. At a constant capture cell voltage of 1.3 V, less of the bed is recovered with increasing release cell voltage, but the energy per mole of CO2 captured and released also decreases.

There are several types of "swing" approaches commonly used for gas separations: "temperature swing" - a simple, well known example uses a metal hydroxide, calcium hydroxide ("slaked lime"), where the carbon dioxide is captured at low temperatures and the lime regenerated at very high temperatures; "pressure swing adsorption," which relies on the differential adsorption of gases onto porous sorbent beads; and this system, "electro-swing adsorption."
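For the lime case, the regeneration step is calcination of the carbonate formed on capture (CaCO3 → CaO + CO2), and the "very high temperatures" can be estimated from tabulated thermochemistry. A sketch using approximate 298 K values for the reaction enthalpy and entropy (my own illustrative figures from standard tables, not from the paper), neglecting their temperature dependence:

```python
# First-order estimate of the temperature at which CaCO3 calcination
# (CaCO3 -> CaO + CO2) becomes spontaneous at 1 atm CO2, i.e. where
# dG = dH - T*dS = 0. The values below are approximate 298 K table
# values; their temperature dependence is neglected.
dH = 178.3e3   # J/mol, approximate standard reaction enthalpy
dS = 160.6     # J/(mol*K), approximate standard reaction entropy

T_eq = dH / dS   # K
print(f"Estimated calcination onset: {T_eq:.0f} K ({T_eq - 273.15:.0f} degC)")
```

The answer, roughly 1100 K, is the heart of the energy penalty of the temperature-swing route: the whole sorbent mass must be cycled through that temperature.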

They are compared in this graphic:

Fig. 13 Comparison of temperature- (TSA), pressure- (PSA) and electro- (ESA) swing operations showing the impact of sorption isotherms on total working capacity.

I note that electricity is always produced at a thermodynamic loss, and thus the use of electricity can be, and often is, thermodynamically questionable. Irrespective of popular opinion to the contrary, electricity is not "green" or "clean."

However, there are circumstances where it can be utilized as a thermodynamic enhancer, specifically at very high temperatures, where it is a side product of another process. For example, the thermochemical splitting of water (or carbon dioxide) can be driven at high temperatures with far greater thermodynamic efficiency than via electrochemical approaches, the most famous of which is water electrolysis. Since the hydrogen and oxygen in the thermochemical water case, and the carbon monoxide and oxygen in the thermochemical carbon dioxide case, will ultimately be brought to ambient temperatures, a temperature gradient is available, and it can be utilized to drive turbines (Brayton), boil water (Rankine), or both, raising the overall efficiency.
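The work recoverable from that gradient is bounded by Carnot. An illustrative calculation (the 1000 °C process temperature is my assumption for the sketch, not a figure from any of the papers discussed here):

```python
# Carnot bound on converting heat to work across the gradient left as
# high-temperature product gases are brought to ambient conditions.
# T_hot is an assumed process temperature, for illustration only.
T_hot = 1273.15   # K, assumed ~1000 degC process temperature
T_cold = 298.15   # K, ambient

eta_carnot = 1.0 - T_cold / T_hot
print(f"Carnot limit across this gradient: {eta_carnot:.1%}")
```

Real bottoming cycles recover only a fraction of this bound, but the point stands: the gradient is free work that an isothermal electrolytic route simply throws away.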

Such temperatures are only economically, thermodynamically and environmentally viable with nuclear energy.

Cool idea; cool paper.

Have a nice day tomorrow.

Influence of Sewage Sludge on Ash Fusion during Combustion of Maize Straw

The paper I'll discuss in this post is this one: Influence of Sewage Sludge on Ash Fusion during Combustion of Maize Straw (Liu et al, Energy Fuels 2019, 33, 10, 10237-10246)

As things stand right now, there is nothing "green" about the combustion of biomass. Biomass combustion is responsible for slightly less than half of the 7 million air pollution deaths we accept each year without a whimper of protest, although such deaths are largely - but hardly entirely - found in the third world, as a result of the combustion of things like straw and garbage indoors in the absence of suitable stoves. The extent to which this practice is "renewable" is a function of the depletion of the soils in which the biomass is grown. The real "green revolution" of the 1950s was dependent on the fertilization of soils with fixed nitrogen, which is nevertheless a threat to the planetary atmosphere - and, certainly of as much or possibly even greater concern, phosphorus, an essential, largely mined resource which is very much subject to depletion.

Despite the above statement, it does seem to me that the combustion of biomass under oxyfuel conditions - that is, in an atmosphere of pure oxygen - has much to recommend it. It is entirely possible, it seems to me, to do this in a closed system, one with no material exchange with the environment except under entirely controlled conditions - exchange which would include useful materials, including relatively pure carbon dioxide available for reduction to solid forms of carbon. Any carbon so obtained would be effectively removed from the atmosphere, and thus the process would be not carbon neutral but rather carbon negative.

I've thought a great deal about such systems, and daydream about them quite frequently, but a purely technical issue is the material nature of the reactors which might do this. Biomass contains a number of inert materials, some of which at high temperatures in the presence of oxygen can be quite corrosive. A material for accomplishing this must therefore be able to withstand high temperatures while avoiding corrosion. I believe modern materials science can meet the challenge, but it is in no way a "slam-dunk." Another important issue is heat exchange. Slags can form on the walls of reactors that are difficult to remove and that prevent free heat exchange, representing an engineering difficulty for the recovery and use of energy in these kinds of processes.

I once read a book called "The Big Necessity" by a woman named Rose George, a wonderful book - not exactly "popular," but written at a level not requiring a scientific education - a rumination on human shit. And by human shit I am not referring to the racist orange thug in the White House, but rather that brown stuff, human feces.

One of the greatest waste disposal problems on this planet, short only of the problem of dangerous fossil fuel waste, is precisely that: human shit.

Actually, though, sewage sludge might well, if regarded correctly, represent a resource, inasmuch as it contains water, carbon, and the aforementioned phosphorus, a very serious matter.

The aforementioned paper points to some possible advantages to including sewage sludge in the combustion of biomass, and it caught my eye.

This excerpt from the introduction to the paper describes in more detail some of what I've just said, although I would regard the first two sentences as being highly questionable as practiced:

As a green renewable energy source, biomass has a zero-greenhouse gas emission characteristic and can convert solar energy and carbon dioxide into useful chemical energy. The rational use of biomass energy can not only reduce the consumption of fossil fuels but also effectively reduce environmental pollution. Therefore, the development of biomass energy is important for heat and power generation.(1,2) In the past few decades, woody biomass has mainly been used to produce electricity and heat. Due to the ever-increasing need for woody biomass in other fields (chemical products and liquid biomass fuels), the price of woody biomass has risen.(3−5) As a result, more attention is paid to agricultural waste.

The ash content of agricultural waste is usually much higher than that of the woody biomass, whereas the composition of ash is also more complex and varied.(6) Agricultural waste contains a large amount of alkali metals (potassium and sodium), as well as related inorganic elements including calcium, magnesium, chlorine, and sulfur.(2,6,7) During the combustion process, most of the potassium in the fuels reacts with silicon to form potassium silicates with a low melting point. These potassium-containing compounds with low melting points exist in a molten state and lead to sintering and slagging at the bottom of the furnace.(6−8) When using a fluidized bed as combustion or a gasification reactor, potassium may also react with the bed material to form low-melting eutectic compounds, which results in the agglomeration of particles, hinders fluidization, and even causes failure of the fluidization.(9−11) Some of the potassium-containing compounds evaporate in the gas phase (such as KOH, KCl, K2CO3, and K2SO4) and condense or deposit on the solid or liquid phase on a low-temperature heating surface, eventually destroying the heating surface.(2,6,8,12) Straw is the most common agricultural waste and has considerable potential for development in terms of combustion for heat and electric power.(13) During the combustion process, the chemical reaction mechanism and the theoretical knowledge of straw (mainly wheat, cotton, and maize) ash have been extensively studied...

....In recent years, a variety of chemical additives have been commonly used in industry to alleviate the problems of ash sintering and slagging. However, this may require high investments and may reduce the economic viability of using these additives in industrial applications.(16,17) Therefore, it is necessary to find a new low-cost, environmentally friendly, anti-slagging additive. The co-combustion of a suitable amount of sewage sludge (SS) and potassium-enriched biomass can alleviate the corrosion of the heating surface, which is a good alternative to using chemical additives.(18−22) SS contains a large amount of silicon, aluminum, phosphorus, iron, and calcium. It was found that SS can capture the alkali metals in the straw and react with it to form high-melting-point compounds, reducing the formation of low-melting-point potassium compounds.(23) Therefore, SS can effectively alleviate the problems of sintering and slagging of biomass ash...

...Li et al.(25) studied the reaction mechanism of phosphorus in SS and potassium in wheat straw. The results showed that the reaction formed high-melting-point potassium aluminosilicate and alkali metal phosphate, which increased the potassium fixation rate of mixed ash. Skoglund et al.(26) conducted a cofiring experiment between biomass and municipal sludge. It was found that the alkali-chloride in biomass ash transformed into alkali metal sulfate after adding SS, which could reduce the risk of alkali metal chloride-related corrosion and slagging.

In general, SS can be used as an anti-slagging additive for the combustion of maize straw (MS), but the scientific evidence for evaluating engineering application feasibility and conducting cost comparison analyses was necessary. The main objective of this work is to study the effect of SS on MS alkali metals’ release characteristics and slagging. The potassium retention rate and the sodium retention rate of MS, SS, and their blends, as well as their slagging characteristics and ash characteristics, were obtained. The results obtained in this experimental study can provide data and theoretical references for sewage sludge’s use as an anti-slagging additive.

This is a Chinese paper and the sewage sludge in this case was dried. (I'm personally not sure that drying the sludge would be a good idea in the long term, as this incurs an energy penalty, but this is their process, not mine.)

They usefully show the form of the maize straw (MS) and, by way of eliminating the puerile silliness and squeamishness associated with this nevertheless important waste form, show a picture of the dried sewage sludge used in their experiments:

The caption:

Figure 1. (a) MS raw material and (b) SS drying raw material.

The maize straw was locally grown:

2.1. Samples. The molding MS used in the experiments originated in Jilin province, China, and is a major crop in northeast China. MS is directly processed in the farmland; a small amount of black soil may be mixed in MS. The MS is first crushed and then compression to form molding MS. SS from Jilin sewage treatment plant was selected, as shown in Figure 1. First, the molding MS and SS were naturally dried and then dried in an air-drying oven at 105 °C to a constant weight. Finally, they were pulverized to obtain particles with a size less than 200 μm. Ultimate and proximate analyses of raw materials are presented in Table 1. The raw samples were ashed in accordance with ASTM/E1755-01. The chemical compositions of the MS and SS ashes were analyzed using X-ray fluorescence (XRF) (ZSX Primusll RIGAKU), and the results are listed in Table 2. To study the effect of SS as an additive on MS ash slagging, the blends of MS−SS with sludge mass ratio in maize straw-sludge mixture of 10 and 20% (by weight) were used and were denoted as M9S1 and M8S2, respectively. The amount of additive was selected based on two factors: (1) the amount of mixed additive should be sufficient to meet the expected reaction requirements, and can significantly reduce the issues related to ash melting and slagging, and (2) the mixing ratio should be practical and feasible. When the additive is mixed, the increased ash content after combustion should not be too high. This is because it becomes difficult for the combustion equipment to remove massive amounts of ash.

2.2. Combustion Process. The combustion experiments were conducted in a muffle furnace. The door was kept semiopen to ensure that the sample was completely burnt in the air. The experiments were conducted at temperatures of 700, 800, and 900 °C. When the furnace temperature reached the set value, four samples (MS, M9S1, M8S2, and SS) were sent to the muffle furnace. To ensure the burning of fuel, each experiment lasted 60 min. After this, ash was collected for subsequent analysis.

Table 1:

Table 2:

The raw materials were analyzed by atomic absorption spectroscopy (AAS) after digestion whereas the ash was measured by XRF. In my opinion ICP/MS is the "go to" technology for elemental analysis and is preferred to AAS, but AAS has a long history and is generally satisfactory for use except where very sensitive analysis is required. (ICP/MS might have picked up things like cadmium and lead, the former being a big problem in agriculture, albeit in Southern as opposed to Northern China.) XRF has the advantage of being able to say something about speciation and also the capability of picking up chlorine, a very important element when one is considering issues in corrosion, a topic I may discuss when discussing other papers that have recently caught my eye in the carbon capture via biomass schemata.

This is how the ash looks when prepared at different temperatures:

The caption:

Figure 2. Morphology of ash at different temperatures.

There is a tendency for the alkali metals to migrate during the combustion process, via volatilization. Corrections were applied to reflect the differences between the starting material and the ash.
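The retention ratio itself is simple mass-balance bookkeeping. Here is a minimal sketch in Python, assuming retention is defined as the potassium remaining in the ash divided by the potassium in the raw fuel; the paper's exact correction may differ, and the numbers below are invented:

```python
# Hypothetical illustration of an alkali-metal retention calculation.
# The exact correction used in the paper may differ; this only shows
# the mass-balance logic.

def retention_ratio(fuel_mass_g, k_in_fuel_wt_pct, ash_mass_g, k_in_ash_wt_pct):
    """Fraction of the fuel's potassium that stays in the ash
    (the remainder is assumed volatilized as KCl, KOH, etc.)."""
    k_in_fuel = fuel_mass_g * k_in_fuel_wt_pct / 100.0
    k_in_ash = ash_mass_g * k_in_ash_wt_pct / 100.0
    return k_in_ash / k_in_fuel

# Invented numbers: 100 g of straw with 1.2 wt% K yields
# 8 g of ash containing 10 wt% K.
r = retention_ratio(100.0, 1.2, 8.0, 10.0)
print(f"K retention: {r:.0%}")  # about two-thirds retained
```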

The following graphics touch on that point and the ability of sewage sludge to mitigate this migration.

The caption:

Figure 3. Effect of SS on the ability to fix alkali metals. (a) Potassium retention ratio; (b) sodium retention ratio; (c) potassium retention growth rate; (d) sodium retention growth rate.

XRD (X-ray diffraction) analysis of the speciation observed:

The caption:

Figure 4. XRD pattern of ash mixed with MS and SS under different conditions (a: MS; b: SS; c: M9S1; and d: M8S2). (1) SiO2, (2) KCl, (3)K2SO4, (4) KAlSi3O8, (5) K2SiO3, (6) Ca7Mg2P6O24, (7) Fe2O3, (8) Ca2P2O7, (9) Al2SiO5, (10) KAlSi2O6, (11) KAlSiO4, (12) CaAl2Si2O8, (13) KCaFe(PO4)2.

The caption:

Figure 5. Micromorphology of MS at different temperatures (magnification of 500×).

The following graphics refer to scanning electron microscopy with Energy Dispersive Spectroscopy (EDS), in which the composition of the marked particles is determined.

The caption:

Figure 6. Micromorphology of SS at different temperatures (magnification of 500×).

A table of results:


The caption:

Figure 7. Micromorphology of M9S1 and M8S2 at different temperatures (magnification of 500×).

And finally:

The caption:

Figure 8. Micromorphology of MS and M9S1 at 800 °C (magnification of 5000×).

Some commentary:

3.5.2. Evaluation of Slagging Tendency on the Basis of the Chemical Compositions of the Ash
The difference in the ash chemical composition is the root cause for the difference in biomass ash melting temperature. The most commonly used indicators for determining the degree of biomass slagging are the ratio of alkali to acid, the ratio of silicon to aluminum, and the ratio of iron to calcium. The alkali acid ratio refers to the ratio of the sum of the alkaline components (oxides of iron, calcium, magnesium, potassium, etc.) to the sum of the acidic components (oxides of silicon, aluminum, and titanium) in the biomass ash. The ratio of silicon to aluminum refers to the ratio of the oxide of silicon to the oxide of aluminum in the biomass ash; the ratio of iron to calcium refers to the ratio of the oxide of iron to the oxide of calcium in the biomass ash.
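These three indicators reduce to simple ratios of the oxide weight fractions reported by XRF (as in Table 2). A minimal sketch of their computation, using invented ash compositions rather than the paper's values:

```python
# Slagging indicators from ash oxide composition (wt%), following the
# definitions quoted above. The composition below is invented for
# illustration; it is not a Table 2 value.

def base_to_acid(ash):
    """Alkali-to-acid ratio: basic oxides over acidic oxides."""
    basic = (ash.get("Fe2O3", 0.0) + ash.get("CaO", 0.0) + ash.get("MgO", 0.0)
             + ash.get("K2O", 0.0) + ash.get("Na2O", 0.0))
    acidic = ash.get("SiO2", 0.0) + ash.get("Al2O3", 0.0) + ash.get("TiO2", 0.0)
    return basic / acidic

def si_to_al(ash):
    return ash["SiO2"] / ash["Al2O3"]

def fe_to_ca(ash):
    return ash["Fe2O3"] / ash["CaO"]

straw_like = {"SiO2": 45.0, "Al2O3": 2.0, "TiO2": 0.2,
              "Fe2O3": 1.5, "CaO": 8.0, "MgO": 3.0,
              "K2O": 25.0, "Na2O": 1.0}

print(f"B/A   = {base_to_acid(straw_like):.2f}")  # high K2O drives this up
print(f"Si/Al = {si_to_al(straw_like):.1f}")
print(f"Fe/Ca = {fe_to_ca(straw_like):.2f}")
```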

The paper is interesting because it tells us a great deal about the properties of biomass both in the form of straw and in the form of sewage sludge, the latter being a material that represents a huge environmental problem but also may prove to be an important resource.

This is an air-based combustion system, and it differs from other processing alternatives, for example high-temperature steam reforming, dry (CO2) reforming, and oxyfuel combustion.

The paper does not address the suitability of these ashes for the recovery of phosphorus and other elements, for example, nor does it specifically address the materials science issues connected with, for example, corrosion and scaling.

Nevertheless, this is very valuable information in defining a path forward for future generations to recover from what we have done to them.

I trust you're enjoying your work week.

Mathematical Modeling of a Microfluidic Reactor for the Reduction of Carbon Dioxide.

The paper I'll discuss in this post is this one: Correlating Uncertainties of a CO2 to CO Microfluidic Electrochemical Reactor: A Monte Carlo Simulation (Raman et al, Ind. Eng. Chem. Res. 2019, 58, 42, 19361–19376). It's in the current issue of this journal as of this writing.

The paper's introductory graphic is a cartoon evoking an Ishikawa "fish plot" diagram:

Removal of the dangerous fossil fuel waste carbon dioxide, which is killing the planet, from the atmosphere is only possible if future generations have something to do with it. Although our current practice is to burn a dangerous greenhouse gas, dangerous natural gas; a liquid, dangerous petroleum; or a solid, dangerous coal, to produce this dangerous fossil fuel waste, with sufficient energy, the combustion of these dangerous fossil fuels is reversible, via a reaction known as the Boudouard Reaction.

I've written at length in various places about this reaction.

Here, from another paper about adjusting the thermal equilibrium the reaction describes, is a graphic that describes the Boudouard reaction:

Source: Microwave-Specific Enhancement of the Carbon–Carbon Dioxide (Boudouard) Reaction

The reaction is shown at the top of this graphic. The equilibrium lines in this graphic, one thermal and the other driven by microwave radiation, show that one can make carbon - which under the right conditions can be processed into useful materials - from carbon monoxide if one removes one of the reactants, which would be carbon dioxide.
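As a rough sanity check on where the thermal equilibrium line sits, one can estimate the equilibrium constant of the Boudouard reaction, C(s) + CO2 ⇌ 2CO, from approximate standard values ΔH° ≈ +172 kJ/mol and ΔS° ≈ +176 J/(mol·K), assumed temperature-independent here, which is only a first approximation:

```python
import math

# Rough Boudouard equilibrium: C(s) + CO2 <=> 2 CO
# Approximate standard values, taken as temperature-independent
# (a simplification; real dH and dS vary with temperature).
DH = 172_000.0   # J/mol  (endothermic toward CO)
DS = 176.0       # J/(mol*K)
R  = 8.314       # J/(mol*K)

def Kp(T_kelvin):
    """Equilibrium constant from dG = dH - T*dS and K = exp(-dG/RT)."""
    dG = DH - T_kelvin * DS
    return math.exp(-dG / (R * T_kelvin))

for T in (700, 900, 977, 1100, 1300):
    print(f"T = {T:4d} K  Kp = {Kp(T):9.3g}")

# Kp crosses 1 near T = DH/DS ~ 977 K (~700 C): below this temperature
# the equilibrium favors carbon + CO2 (the direction exploited for
# carbon recovery); above it, CO is favored.
```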

A great deal has been written in the scientific literature about reducing carbon dioxide to carbon monoxide, and this paper is just one example. Microfluidic devices are just what they sound like, devices with small channels that are designed to maximize surface area by forcing a fluid - in this case a gas, the dangerous fossil fuel waste carbon dioxide - through tiny channels. Although the technology for making these devices has advanced to a high level only in recent times, these types of devices have long been known: A well understood microfluidic device, in this case for fluid exchange, is a human lung.
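The surface-area advantage of tiny channels is pure geometry: for a rectangular channel, the ratio of wall area to volume is 2(w + h)/(wh), which grows as the channel shrinks. A quick illustration (channel dimensions are arbitrary, not taken from the paper):

```python
# Surface-to-volume ratio of a rectangular channel of width w and
# height h, per unit length: perimeter / cross-section = 2(w+h)/(w*h).
def surface_to_volume(w_m, h_m):
    return 2.0 * (w_m + h_m) / (w_m * h_m)

macro = surface_to_volume(1e-2, 1e-2)   # 1 cm square duct
micro = surface_to_volume(1e-4, 1e-4)   # 100 micrometre microchannel

print(f"1 cm duct:      {macro:8.0f} m^2 per m^3")
print(f"100 um channel: {micro:8.0f} m^2 per m^3")
print(f"ratio: {micro / macro:.0f}x more wall area per unit volume")
```

Shrinking each dimension by a factor of 100 buys a factor of 100 more reactive wall area per unit of gas volume, which is the whole point of the microfluidic architecture.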

There are many variations on the Boudouard reaction, some of my personal favorites being "dry reforming" of waste organic materials, municipal and industrial carbon based wastes for example, or the dry reforming of biomass. The method described here is electrochemical. Although electricity is a thermodynamically questionable approach to energy storage or materials processing, there are certain conditions where grid and load balancing make waste electricity available for electrochemical processes, for example in the case of a plant designed for continuous operation that runs during low load periods.

From the paper's introduction:

Climate change and global warming are among the major present-day concerns due to increasing CO2 emissions across the world.(1) Following several global initiatives to address these concerns, such as the Paris Agreement and the Kyoto Protocol, CO2 reduction and CO2 utilization technologies have attracted increased attention.(2−7) Utilizing captured CO2 through electrochemical methods not only serves as an alternative to carbon sequestration but also helps toward achieving a carbon neutral energy cycle when operated by renewable sources such as solar, wind, and so forth.(8−12) These electrochemical reactors convert the feedstock, CO2, to useful chemicals such as formic acid and formates,(13,14) alcohols,(15,16) carbon monoxide (CO),(17,18) ethylene,(19,20) and methane.(21) The selectivity of the electrochemical conversion depends on three major factors: (1) the reaction mechanism, in the form of a cathode side catalyst, (2) the ion-adsorbate interaction, in the form of the electrolyte species, and (3) the electrochemical activation energy, in the form of the applied potential.

Typically, the selectivity toward one or more of the above-mentioned products is controlled by the choice of the cathode side catalyst including metal surfaces,(21) metal nanoparticles,(22) metal oxides,(23) organometallic molecules,(24) and metal and covalent organic frameworks.(25,26) Several recent reviews outline the recent trends in the selectivity-based electrocatalyst development.(27−30) These electrocatalysts are usually studied and screened in a three-electrode setup or an H-cell. However, these reactor configurations can be mass-transport-limited.(31,32) In addition, these systems are batch reactors and are not scalable, making them less relevant for commercialization.

To overcome mass transport limitations and to achieve scalability, flow cell architectures were investigated. These flow cell reactors are of different types, viz., solid oxide electrolysis cells,(33,34) membrane-based electrolytic cells,(32,35) and microfluidic flow cells (MFCs).(36,37) Berlinguette and co-workers(38) present a detailed account on the development of flow cells for the electroreduction of CO2. Bevilacqua et al.(39) discuss the efforts to scale up these flow cells. Despite a large number of such studies being experimental, there has also been recent interest toward the mathematical modeling of such systems.(6,40−43) These mathematical models, depending on their complexity, can shed light on the intricate interplay between gas transport and electrochemistry. Along with suitable experiments, through these models, we can delve deeply into the effects of various design, physical, material, operating, and electrochemical parameters on the functioning of the reactor...

...during the large-scale fabrication of these microfluidic reactors, the properties may deviate from the values specified for a desired output. Therefore, in practical applications, it is difficult to identify these uncertainties and estimate their influence on the conversion efficiency, reactor performance, and selectivity.
To capture this random yet probabilistic nature of variation of the input parameters, a stochastic method such as Monte Carlo simulations (MCS) is necessary. Through MCS, we can identify not only the most critical input parameter in a given range of operating parameters but also uncover the effects of the simultaneous variation of different input parameters on the system. Following this stochastic approach, deterministic analyses such as identifying optimal regions of operation, different choices of materials, and robust control strategies, diagnoses, and prognoses can be carried out...

...In light of the lack of a stochastic technique that can record the probabilistic nature of the input parameters and their relative impact on the current density and the trade-off between cell performance and conversion efficiency and the Faradaic efficiency of an MFC reactor, first we conduct MCS of a 2D mechanistic model of the MFC reactor. To achieve this, we generate a large random population of input parameters and simulate a detailed mechanistic model of the MFC reactor across the parameter space. A fish-bone diagram relating these stochastic input parameters to the response variables is illustrated in Figure 1. The varied stochastic parameters can be classified as (a) geometric/design, consisting of the thickness of each functional layer and the cell length and width; (b) physical, involving the porosity of the functional layers and the dynamic viscosity of the feed gas; (c) material, consisting of the electrical conductivity of the different functional layers and the ionic conductivity of the electrolyte; (d) operating, including the applied cell potential, temperature, feed gas flow rates, and inlet feed mole fractions; and (e) electrochemical parameters, including the exchange current densities and charge transfer coefficients...

Figure 1:

The caption:

Figure 1. Cause and effect “fish-bone” diagram illustrating the varied stochastic parameters influencing the MFC CO2 converter.
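The workflow the authors describe - generate a large random population of inputs, run the model, and rank the parameters by their influence on the response - can be sketched with a toy model standing in for their 2D mechanistic reactor model. Everything below is invented for illustration; it only shows the Monte Carlo mechanics, not the paper's physics:

```python
import random

random.seed(0)

# Toy stand-in for the reactor model: output depends strongly on
# parameter "a" and weakly on parameter "b". This is NOT the authors'
# model, just an illustration of the MCS workflow.
def toy_model(a, b):
    return 3.0 * a + 0.3 * b

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Draw a large random population of inputs and evaluate the model.
a_samples = [random.gauss(1.0, 0.1) for _ in range(5000)]
b_samples = [random.gauss(1.0, 0.1) for _ in range(5000)]
outputs = [toy_model(a, b) for a, b in zip(a_samples, b_samples)]

# Rank parameters by |correlation| with the output, analogous to the
# paper's sensitivity rankings.
ranking = sorted(
    [("a", abs(pearson(a_samples, outputs))),
     ("b", abs(pearson(b_samples, outputs)))],
    key=lambda kv: kv[1], reverse=True)

for name, r in ranking:
    print(f"parameter {name}: |r| = {r:.2f}")
```

The real study does this over dozens of geometric, physical, material, operating, and electrochemical parameters at once, which is why a stochastic treatment is needed rather than varying one parameter at a time.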

After some further discussion, the authors write this to describe their approach to modeling putative electrochemical devices for reducing carbon dioxide to carbon monoxide:

Microfluidic Cell Model. We consider an MFC consisting of several functional layers: cathode and anode current collectors, cathode and anode gas flow fields, cathode and anode gas diffusion electrodes, and an electrolyte channel. An aqueous electrolyte solution flows between the cathode and the anode gas diffusion electrodes. The cathode and anode gas diffusion electrodes are coated with the catalyst at the interface with the electrolyte, giving rise to a gas diffusion layer and a catalyst layer. A detailed schematic of the MFC along with the computational domain is presented in Figure 2.

Figure 2:

The caption:

Figure 2. Overall 2D schematic of the microfluidic CO2 converter presenting different functional layers and the computational domain with boundaries marked with Roman numerals: (I) cathode feed inlet; (II) cathode and anode outlets; (III) anode feed inlet; (IV) insulated vertical walls; (V) cathode current collector horizontal wall; (VI) gas flow field−current collector interface; (VII) gas diffusion layer−gas flow field interface; (VIII) catalyst layer−gas diffusion layer interface; (IX) electrolyte−catalyst layer interface; (X) anode current collector horizontal wall.

The reactions considered here by the authors involve a hydrogen side product. There are examples of such reactions which do not involve hydrogen, although in the electrical case, the reduced carbon dioxide is made into hydrocarbons and/or alcohols.

From the text:

The electrochemical reduction of CO2 to CO takes place in the cathode catalyst layer. Along with this reaction, when a sufficiently large overpotential is applied, the water diffused out of the electrolyte also undergoes reduction to produce H2 gas. On the cathode side, the production of CO and the hydrogen evolution reaction (HER) can be summarized as

The oxygen evolution reaction (OER) on the anode side is given as

The latter reaction, the oxygen evolution reaction at an electrode, is the subject of much discussion in the scientific literature because of its nature as a four-electron reaction. Although electrolysis is well known and often practiced - most of the world's hydrogen is nonetheless made by reforming dangerous natural gas - the oxygen evolution reaction places limits on the energy efficiency of electrochemical water splitting and is thus open to improvement.
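For reference, textbook forms of the half-reactions in alkaline media - an assumption on my part; the paper's exact formulation may differ - make the electron counts explicit:

```latex
\begin{align*}
\text{cathode (CO):} \quad & \mathrm{CO_2 + H_2O + 2e^- \rightarrow CO + 2\,OH^-} \\
\text{cathode (HER):} \quad & \mathrm{2\,H_2O + 2e^- \rightarrow H_2 + 2\,OH^-} \\
\text{anode (OER):} \quad & \mathrm{4\,OH^- \rightarrow O_2 + 2\,H_2O + 4e^-},
\qquad E^\circ = 1.23~\mathrm{V~vs.~RHE}
\end{align*}
```

The cathode reactions each move two electrons, but the anode must assemble an O–O bond across four electron transfers, which is why the OER is typically the kinetic bottleneck.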

A table of the variables considered.

Some other graphics from the paper, which may or may not mean much:

The caption:

Figure 3. (a) Polarization curve and (b) conversion efficiency (—) and Faradaic efficiency (---) as a function of the applied cell potential, corresponding to the mean values of input parameters.

The caption:

Figure 4. Sample distribution of the inlet CO2 mole fraction, xCO2in (bars), and the fitted normal distribution functions (lines) for different sample sizes: (a) 170 and (b) 103.

The caption:

Figure 5. Ranking of stochastic parameters under the IND scenario at Ecell = −2.7 V (black), −2.8 V (blue), −2.9 V (green), −3.0 V (gray), and −3.1 V (violet) for (a) cell performance, (b) conversion efficiency, and (c) Faradaic efficiency.

The caption:

Figure 6. Ranking of stochastic parameters under the SIM scenario at Ecell = −2.7 (black), −2.8 (blue), −2.9 (green), −3.0 (gray), and −3.1 V (violet) for (a) cell performance, (b) conversion efficiency, and (c) Faradaic efficiency.

The caption:

Figure 7. Scatter plots of the cell performance against αCO, xCO2in, and L in (a–c) the IND scenario (circles) and (d–f) the SIM scenario (circles). The triangle represents the cell performance corresponding to the mean values of input parameters.

The caption:

Figure 8. Comparison of the REG model (circles) and GPR model (dots) for the SIM scenario at Ecell = (a) −2.7, (b) −2.8, (c) −2.9, (d) −3.0, and (e) −3.1 V.

The caption:

Figure 9. Probability distribution of cell performance for the SIM scenario at Ecell = (a) −2.7, (b) −2.8, (c) −2.9, (d) −3.0, and (e) −3.1 V.

In the above, SIM, IND, and REG refer to the manner in which the Monte Carlo simulation is run, i.e. the process and sequence by which the variables are subjected to perturbations.

This kind of research can be obscure and arcane, but it is very, very, very important nonetheless.

A word we hear too much is could, but I'll use it anyway. Properly focused we could do so much with the power of our scientific tools, but regrettably we are doing very little.

Have a nice weekend.

Bacterial biodiversity drives the evolution of CRISPR-based phage resistance

The paper I'll discuss in this brief post is this one: Bacterial biodiversity drives the evolution of CRISPR-based phage resistance (Ellinor O. Alseth, Elizabeth Pursey, Adela M. Luján, Isobel McLeod, Clare Rollie & Edze R. Westra, Nature 574, 549–552 (2019))

(The authors appear to be 100% women, nice to see.)

CRISPR is very much in the scientific news these days, both as a research tool and as a possible therapeutic agent for a host of genetic diseases, including (but hardly limited to) cancer, which may be thought of as a somatic genetic disease. With respect to its use as a research tool, a few days back I posted in this space a report utilizing CRISPR to interrogate the toxin resistance observed in the Monarch butterfly: Genome editing retraces the evolution of toxin resistance in the monarch butterfly.

I haven't really paid much attention to the nuts and bolts of CRISPR technology, at least until a chance conversation at a science oriented social event stimulated me to do so. I was of course, aware of its role in gene therapy, but not of the basic science underlying it. The appearance of two papers in two subsequent issues of Nature stimulated more interest in the origins and use of this technology.

As many people know, among the many things we are leaving for future generations besides an atmosphere destroyed by appeals to denial, mysticism, fear and ignorance, and effective depletion of many of the elements in the periodic table, is a plethora of dangerous antibiotic resistant bacteria. One avenue for addressing these resistant bacteria is to appeal to an old idea, viral antibiotics: inoculating people with viruses that are known to attack and kill bacteria, viruses known as phages. (Phages are also widely used as research and production tools, particularly for the insertion of genes into organisms in the biotech industry.) This old idea is worth a look given that our understanding of molecular biology has entered a golden age which, one hopes, will be maintained despite the rise of anti-intellectualism on both political extremes.


The CRISPR/Cas9 system is actually an immune system for bacteria, and this paper is about how this system is utilized to develop resistance to phages.

From the abstract of the paper:

About half of all bacteria carry genes for CRISPR–Cas adaptive immune systems1, which provide immunological memory by inserting short DNA sequences from phage and other parasitic DNA elements into CRISPR loci on the host genome2. Whereas CRISPR loci evolve rapidly in natural environments3,4, bacterial species typically evolve phage resistance by the mutation or loss of phage receptors under laboratory conditions5,6. Here we report how this discrepancy may in part be explained by differences in the biotic complexity of in vitro and natural environments7,8. Specifically, by using the opportunistic pathogen Pseudomonas aeruginosa and its phage DMS3vir, we show that coexistence with other human pathogens amplifies the fitness trade-offs associated with the mutation of phage receptors, and therefore tips the balance in favour of the evolution of CRISPR-based resistance. We also demonstrate that this has important knock-on effects for the virulence of P. aeruginosa, which became attenuated only if the bacteria evolved surface-based resistance. Our data reveal that the biotic complexity of microbial communities in natural environments is an important driver of the evolution of CRISPR–Cas adaptive immunity, with key implications for bacterial fitness and virulence.

From the introduction:

P. aeruginosa is a widespread opportunistic pathogen that thrives in a range of different environments, including hospitals, where it is a common source of nosocomial infections. In particular, it frequently colonizes the lungs of patients with cystic fibrosis, in whom it is the leading cause of morbidity and mortality9. In part fuelled by a renewed interest in the therapeutic use of bacteriophages as antimicrobials (phage therapy)10,11, many studies have examined whether and how P. aeruginosa evolves resistance to phage (reviewed in ref. 12). The clinical isolate P. aeruginosa strain PA14 has been reported to predominantly evolve resistance against its phage DMS3vir by the modification or complete loss of the phage receptor (type IV pilus) when grown in nutrient-rich medium5, despite carrying an active CRISPR–Cas adaptive immune system. By contrast, under nutrient-limited conditions, the same strain relies on CRISPR–Cas to acquire phage resistance5. These differences are due to higher phage densities during infections in nutrient-rich compared with nutrient-limited conditions, which in turn determines whether surface-based resistance (with a fixed cost of resistance) or CRISPR-based resistance (infection-induced cost) is favoured by natural selection5,13. Although these observations suggest abiotic factors are crucial determinants of the evolution of phage resistance strategies, the role of biotic factors has remained unclear, even though P. aeruginosa commonly co-exists with a range of other bacterial species in both natural and clinical settings14,15. We proposed that the presence of a bacterial community could drive increased levels of CRISPR-based resistance evolution for two main reasons. First, reduced densities of P. aeruginosa in the presence of competitors may limit phage amplification, and favour CRISPR-based resistance5. Second, pleiotropic costs associated with the mutation of phage receptors may be amplified during interspecific competition.

In their experiments, the authors used a streptomycin-resistant strain (PA14) of the pathogen P. aeruginosa and cultured it in the presence of three other pathogenic bacterial species that are not infected by the DMS3vir virus: Staphylococcus aureus, Burkholderia cenocepacia and Acinetobacter baumannii.

Some pictures from the paper suggesting the results:

The caption:

a, Proportion of P. aeruginosa that acquired surface modification (SM) or CRISPR-based resistance, or remained sensitive at 3 d.p.i. with phage DMS3vir when grown in monoculture or polycultures, or with an isogenic surface mutant (6 replicates per treatment, with 24 colonies per replicate, n = 36 biologically independent replicates). Data are mean ± s.e.m. b, Microbial community composition over time for the mixed-species infection experiments. AB, A. baumannii; BC, B. cenocepacia; PA14, P. aeruginosa; SA, S. aureus.

The authors also evaluated the potential pathogenic implications of this result by growing cultures in synthetic sputum.

The rise of CRISPR-based resistance in P. aeruginosa in the presence of bacterial diversity, as shown in the previous graphic, is clearly observed.

Another graphic:

The caption:

a, Relative fitness of a P. aeruginosa clone with CRISPR-based resistance after competing for 24 h against a surface-modification clone at varying titres of phage DMS3vir in the presence or absence of a mixed microbial community. Regression slopes with shaded areas corresponding to 95% confidence interval (n = 144 biologically independent samples). b, Relative fitness after competition in the absence of phage, but in the presence of other bacterial species individually or as a mixture. Data are mean and 95% confidence intervals (n = 144 biologically independent samples).

Another graphic touching on virulence:

a, Time until death (given as the median ± one standard error) after infection with PA14 clones that evolved phage resistance in either the presence or the absence of a mixed microbial community (n = 376 biologically independent samples, analysed using a Cox proportional-hazards model with Tukey contrasts). LT50, median lethal time. b, The effect of the type of evolved phage resistance (CRISPR-based or surface-modification-based) on bacterial motility (n = 981 biologically independent samples). Box plots show the median with the upper and lower twenty-fifth and seventy-fifth percentiles, the interquartile range, and outliers shown as dots. c, The effect of the type of resistance on in vivo virulence (time until death, given as the median ± one standard error; n = 981, analysed using a Cox proportional-hazards model with Tukey contrasts).

Some excerpts from the concluding discussion:

We have shown that the evolutionary outcome of bacteria–phage interactions can be fundamentally altered by the microbial community context. Although conventionally studied in isolation, these interactions are usually embedded in complex biotic networks of interacting species, and it is becoming increasingly clear that this can have key implications for the evolutionary epidemiology of infectious disease24,25,26,27,28. Our work shows that the community context can shape the evolution of different host-resistance strategies. Specifically, we find that the interspecific interactions between four bacterial species in a synthetic microbial community can have a large effect on the evolution of phage-resistance mechanisms by amplifying the constitutive fitness cost of surface-based resistance5. The finding that biotic complexity matters complements previous work on the effect of abiotic variables and force of infection on the evolution of phage resistance5...

...Primarily, the absence of detectable trade-offs between CRISPR-based resistance and virulence, as opposed to when bacteria evolve surface-based resistance, suggests that the evolution of CRISPR-based resistance can ultimately influence the severity of disease. Moreover, the evolution of CRISPR-based resistance can drive more rapid phage extinction29, and may in a multi-phage environment result in altered patterns of cross-resistance evolution compared with surface-based resistance30. The identification of the drivers and consequences of CRISPR-resistance evolution might help to improve our ability to predict and manipulate the outcome of bacteria–phage interactions in both natural and clinical settings.

Interesting, I think, and important.

Have a nice day tomorrow.

Genome editing retraces the evolution of toxin resistance in the monarch butterfly.

The paper I'll discuss in this post is this one: Genome editing retraces the evolution of toxin resistance in the monarch butterfly. (Whiteman et al, Nature 574, 409–412 (2019))

One of the happiest memories of the childhood of my sons was when we went to a local park, the New Jersey side of Washington's Crossing Park, where the ranger showed us how to collect monarch butterfly eggs, which were laid on the undersides of the leaves of milkweed, a poisonous plant that grows wild all around here. We took the leaves home, put them in a butterfly cage, and ultimately the eggs hatched, and caterpillars began munching the leaves. We kept collecting leaves from the fields around here until the caterpillars formed chrysalises and finally emerged as butterflies, which we released.

It was a beautiful, wonderful experience, and probably one I would have never had were I not a father.

The Monarchs don't really "migrate" as individuals; each year several generations make their way across North America from Mexico, breeding repeatedly, happily munching toxic milkweed all across America.

A truly wondrous life form!

CRISPR/Cas9 is a gene editing tool developed by Jennifer Doudna and Emmanuelle Charpentier that utilizes a bacterial protein, known as Cas9, which is in a sense an "immune defense" for prokaryotic organisms. It operates in conjunction with a guide RNA sequence that acts much like "interfering RNA" (RNAi), relying on complementarity. By appropriate editing of the RNA sequence, this system can be modified for the purpose of gene editing, both for research purposes and, perhaps, for therapeutic modalities.
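To illustrate the role of sequence complementarity, here is a toy search for Cas9 target sites: 20-nucleotide protospacers immediately followed by an "NGG" PAM, the motif the guide RNA must be designed against. The DNA sequence and function below are invented for illustration:

```python
# Toy illustration of Cas9 target selection: find 20-nt protospacers
# followed by an "NGG" PAM on one strand. Sequence is invented.
def find_protospacers(dna, spacer_len=20):
    """Return (position, protospacer, PAM) for each NGG PAM site."""
    hits = []
    for i in range(spacer_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":                      # N G G motif
            hits.append((i - spacer_len, dna[i - spacer_len:i], pam))
    return hits

seq = "ATGCGTACCGGATTACGATCATGGCCATTAGCTTAGGCCTA"
for pos, spacer, pam in find_protospacers(seq):
    print(f"pos {pos:2d}: spacer {spacer}  PAM {pam}")
```

A real guide-design tool additionally scans the reverse complement strand and scores candidates for off-target matches elsewhere in the genome; this sketch only shows the PAM-anchored matching idea.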

(Having participated in my career in a number of "really hot" biomedical fads, I tend to be more "wait and see" than over the top enthusiastic for these sweeping claims.)

The paper cited above uses CRISPR/Cas9 as a research tool to discover how the monarch butterfly became immune to the plant toxins in milkweed. This toxicity, by the way, protects the monarch from predators, since the butterflies themselves are toxic.

From the abstract:

Genome editing retraces the evolution of toxin resistance in the monarch butterfly
Marianthi Karageorgi, Simon C. Groen, Fidan Sumbul, Julianne N. Pelaez, Kirsten I. Verster, Jessica M. Aguilar, Amy P. Hastings, Susan L. Bernstein, Teruyuki Matsunaga, Michael Astourian, Geno Guerra, Felix Rico, Susanne Dobler, Anurag A. Agrawal & Noah K. Whiteman
Nature volume 574, pages 409–412 (2019); published 2 October 2019

Identifying the genetic mechanisms of adaptation requires the elucidation of links between the evolution of DNA sequence, phenotype, and fitness1. Convergent evolution can be used as a guide to identify candidate mutations that underlie adaptive traits2,3,4, and new genome editing technology is facilitating functional validation of these mutations in whole organisms1,5. We combined these approaches to study a classic case of convergence in insects from six orders, including the monarch butterfly (Danaus plexippus), that have independently evolved to colonize plants that produce cardiac glycoside toxins6,7,8,9,10,11. Many of these insects evolved parallel amino acid substitutions in the α-subunit (ATPα ) of the sodium pump (Na+/K+-ATPase)7,8,9,10,11, the physiological target of cardiac glycosides12. Here we describe mutational paths involving three repeatedly changing amino acid sites (111, 119 and 122) in ATPα that are associated with cardiac glycoside specialization13,14. We then performed CRISPR–Cas9 base editing on the native Atpα gene in Drosophila melanogaster flies and retraced the mutational path taken across the monarch lineage11,15. We show in vivo, in vitro and in silico that the path conferred resistance and target-site insensitivity to cardiac glycosides16, culminating in triple mutant ‘monarch flies’ that were as insensitive to cardiac glycosides as monarch butterflies. ‘Monarch flies’ retained small amounts of cardiac glycosides through metamorphosis, a trait that has been optimized in monarch butterflies to deter predators17,18,19. The order in which the substitutions evolved was explained by amelioration of antagonistic pleiotropy through epistasis13,14,20,21,22. Our study illuminates how the monarch butterfly evolved resistance to a class of plant toxins, eventually becoming unpalatable, and changing the nature of species interactions within ecological communities2,6,7,8,9,10,11,15,17,18,19.

An excerpt of the introductory text:

Convergently evolved substitutions in ATPα have been hypothesized to contribute to cardiac glycoside resistance in the monarch butterfly and other specialized insects via target-site insensitivity (TSI) in the sodium pump6,7,8,9,10,11. However, it is unclear whether the changes are sufficient for resistance in whole organisms6,7,8,9,10,11,15,18,23 or are ‘molecular spandrels’—candidate adaptive alleles that do not confer a fitness advantage when tested more rigorously1,5. In addition, the evolutionary order of substitutions suggests a constrained adaptive walk11,13,14,20,21,22,24, but an in vivo genetic dissection has not been conducted, so it is not possible to draw conclusions about the adaptive role of these substitutions1,2,3,4,5,15.

We have identified a core set of amino acid substitutions in cardiac glycoside-specialized insects that define potential mutational paths to resistance and TSI. We focused on the first extracellular loop (H1–H2) of ATPα, where most candidate TSI-conferring substitutions occur7,8,9,10,11 (Fig. 1a). We used maximum likelihood to reconstruct ancestral states for cardiac glycoside specialization (feeding and sequestering) and amino acids within the H1–H2 loop of ATPα across a species phylogeny...

The authors identified a series of known amino acid substitutions in ATPα in the monarch, at residues 111, 119 and 122.

A figure from the text:

The caption:

a, Protein homology model of Drosophila melanogaster ATPα (navy) superimposed on a Sus scrofa ATPα crystal structure (light grey) with ouabain (yellow) in the binding pocket. Residues 111, 119 and 122 (sticks) within the H1–H2 extracellular loop are associated with feeding on cardiac glycoside-producing plants and toxin sequestration. b, Maximum likelihood phylogeny based on 4,890 bp from Atpα and coi, with maximum likelihood ancestral state reconstruction (ASR) of feeding and sequestering states, estimated from the states of extant species (inner band of squares). Reconstructions are shown as nodal pie graphs (white, neither feeding nor sequestering; green, feeding; purple, feeding and sequestering), and the number of substituted sites at positions 111, 119 and 122 along branches in grey-scale (light grey 0, medium grey 1, dark grey 2, black 3), based on maximum likelihood ASR of H1–H2 loop amino acid sequences. Black asterisks indicate the Atpα copy number for species with multiple paralogues. c, ATPα substitutions inferred from ASR at positions 111 (blue), 119 (yellow) and 122 (red) in 21 lineages where specialization occurred independently. d, P value distribution from a set of randomized tests to determine the reproducibility of substitutions observed along mutational paths among sub-sampled groups compared to randomly permuted substitutions. On average, 4.9% (considering all mutational steps) of randomly permuted trajectories demonstrate a degree of ordering equal to or greater than observed mutational paths.

The DNA of Drosophila melanogaster flies was modified via editing to introduce the corresponding substitutions, allowing the organisms to feed on cardiac glycosides as well.

Another figure from the paper:

a, The monarch butterfly lineage with the substitutions observed in the H1–H2 loop of ATPα (adapted from Petschenka et al.)11. b, Non-synonymous point mutations in the edited DNA sequence of the native Atpα in Drosophila knock-in lines code for the substitutions at sites 111, 119 and 122. Codons are underlined. c, d, Larval–adult survival (c) and adult survival (d) of flies reared on diets with ouabain were different between monarch lineage knock-in lines and control lines (QAN = engineered control; QAN* = w1118 wild type). Symbols represent the mean ± s.e.m. of 3–6 biological replicates (50 larvae and 10 females per replicate in c and d, respectively). Curves were fit using a logistic regression model for each line. Pairwise differences in survivorship trajectories between lines were evaluated with a likelihood ratio test on the significance of the interaction term between genotype (line) and ouabain concentration in a logistic regression for each pair of lines (letters). e, Egg–adult survival on diet supplemented with Asclepias curassavica leaves relative to control diets (n = 3–4; 100–200 eggs per replicate, see Methods; mean ± s.e.m.) was different between monarch lineage knock-in lines and QAN* (one-way ANOVA, P = 0.0035 followed by post hoc Tukey’s tests (letters)). f, Ouabain concentrations in diet versus adult fly bodies among monarch lineage knock-in lines (n = 2–4 biological replicates per group). Adult flies had not fed since eclosion. Genotype and dietary ouabain concentration influenced the probability of detecting ouabain in post-eclosion flies (logistic regression and likelihood ratio test, genotype two-sided P = 0.024, dietary ouabain concentration two-sided P = 6.344 × 10−5). Further information on experimental design and statistical test results is found in the Source Data.

Some observations:

We obtained in vivo evidence for adaptation in monarch lineage Atpα through larval–adult and adult survival experiments. Knock-in fly lines were reared on yeast medium with increasing concentrations of ouabain, a hydrophilic cardiac glycoside6 (Fig. 2c, d, Extended Data Figs. 4, 5). LAN, the first genotype to evolve, increased larval–adult survival at lower ouabain concentrations, but survival declined sharply as concentrations increased. LAN also increased adult survival at lower ouabain concentrations. LSN, the second genotype to evolve, increased larval–adult survival at the highest ouabain concentrations. The next step, VSN, provided the same larval–adult and adult survival benefit as LSN. Finally, the survival of ‘monarch flies’ carrying the monarch butterfly genotype (VSH) was unaffected by even the highest levels of ouabain in larvae and adults6,9,11,18 (Fig. 2c, d), which was not due to reductions in feeding rate or toxin ingestion (Extended Data Fig. 6).

When knock-in line eggs were placed on medium containing the suite of cardiac glycosides found in the leaves of the milkweed species Asclepias curassavica and A. fascicularis6, monarch lineage fly genotypes generally showed increased egg–pupal and egg–adult survival rates (Fig. 2e, Extended Data Fig. 7), although not always for VSN (Extended Data Figs. 3, 7). The LSN, VSN and VSH genotypes may enable insects to cope with the complex milieu of cardiac glycosides encountered during host shifts to these plants.

The monarch butterfly ATPα substitutions at positions 111, 119 and 122 may unlock a passive evolutionary route to cardiac glycoside sequestration, as we found small amounts of ouabain in newly emerged adult ‘monarch flies’ reared as larvae on a diet containing ouabain (Fig. 2f). However, toxin concentrations were far lower than in monarch butterflies, and the location of ouabain in flies is unclear6,17,18.
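The stepwise path quoted above (LAN first, then LSN, then VSN, and finally the monarch genotype VSH, starting from the ancestral QAN) amounts to one amino acid change per step at sites 111, 119 and 122. A minimal sketch of that walk (the genotype naming follows the excerpts; the code itself is mine):

```python
# The genotypes are named by the amino acids at ATPalpha sites 111, 119
# and 122: ancestral QAN, monarch VSH. One substitution is applied per
# step, in the order the excerpts describe for the monarch lineage.
SITES = (111, 119, 122)

def apply_substitution(genotype: str, sub: str) -> str:
    """Apply a substitution written like 'Q111L' to a 3-letter genotype."""
    old, site, new = sub[0], int(sub[1:-1]), sub[-1]
    i = SITES.index(site)
    assert genotype[i] == old, f"{sub} does not fit {genotype}"
    return genotype[:i] + new + genotype[i + 1:]

path = ["Q111L", "A119S", "L111V", "N122H"]  # monarch-lineage order
states = ["QAN"]
for sub in path:
    states.append(apply_substitution(states[-1], sub))
print(" -> ".join(states))  # QAN -> LAN -> LSN -> VSN -> VSH
```

Note that the walk visits four intermediate genotypes out of the eight possible combinations; the paper's point is that this particular ordering minimizes the fitness costs along the way.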

Another graphic:

The caption:

a, In vitro ouabain sensitivity of Na+/K+-ATPase activity in extracts of monarch lineage knock-in and control line fly heads (solid lines; QAN, engineered control; QAN*, w1118 wild type), against activity in extracts of monarch butterfly and pig nervous tissue (positive and negative control, dashed red and black line, respectively). Symbols represent the mean ± s.e.m. of 3–7 biological replicates. log10[IC50] (half-maximum inhibitory concentration) values for the Na+/K+-ATPases were estimated after fitting four-parameter logistic regression curves, and were different between genotypes (one-way ANOVA (P < 0.0001) with post hoc Tukey’s tests (letters)). b, Mean docking scores (± s.e.m. of five replicate calculations) from molecular simulations of ouabain binding to the Na+/K+-ATPases found along the monarch lineage showed differences between genotypes (one-way ANOVA (P = 0.0001) with post hoc Tukey’s tests (letters)). c, Effects of the substitutions Q111L, A119S and their combination on larval–adult survival on diets with 30 mM ouabain. Symbols represent the mean ± s.e.m. of three biological replicates (50 larvae each). The effect of mutations A119S and Q111L together was nearly threefold greater than the combined individual effects on survivorship (logistic regression, interaction effect between mutations: ***P = 2.36 × 10−15), indicating positive epistasis. d, Duration of paralysis following mechanical shocks (that is, bang sensitivity; n = 60 five-to-six-day-old adult flies). Bang sensitivity was affected by genotype (Kruskal–Wallis test (P < 0.0001) with post hoc Dunn’s multiple comparisons tests (letters); medians with 95% confidence intervals), and was higher for QAH than for all other genotypes (P < 0.05), except for LAN, which showed higher bang sensitivity than LSN (P = 0.0134). Further information on experimental design and statistical test results can be found in the Source Data.

The full paper makes comments about the evolutionary pathway by which the TSI, target-site insensitivity, to cardiac glycosides evolved.

(The old drug digitalis and the related digoxin fall into this class of compounds.)

Some concluding remarks:

Substitutions at three amino acid sites in ATPα are sufficient together, but not alone, to explain the evolution of resistance and TSI to cardiac glycosides achieved by the monarch butterfly at organismal, physiological and biochemical levels. The adaptive walk follows theoretical predictions on the length of such walks2,3,4,13,14, involves epistasis13,14,20,22, and minimizes pleiotropic fitness costs3,4,13,14,21, and variations of it convergently re-appeared across lineages that diverged more than three hundred million years ago7,8,9,10,11. Genome editing technology facilitates functional tests of adaptation across levels of biological organization5,25,26. Although mutational paths to adaptive peaks have been identified in microorganisms2,3,4,13,14,22, this is, to our knowledge, the first in vivo validation of a multi-step adaptive walk in a multicellular organism, and illustrates how complex organismal traits can evolve by following simple rules.

This technology is very powerful.

Like all powerful technologies, it has a potential to do good and great things, and likewise, bad and terrible things. The choice is moral. As old and as cynical as I am, I still believe, in spite of it all, in the capacity for humanity to come down on the side of good and great.

Have a nice day tomorrow.

Origin of an Upbeat Phrase in Dark Times: "We Cannot Predict the Future, But We Can Invent It."

I thought it was attributed to Lincoln.

It's not, apparently:

The Quote Investigator, Investigates

Nevertheless, in these times, with our democracy in such danger, the thought somehow thrills me.

I hope the young people live this way.

Go Millennials! Take the World! Do better than us! You can't possibly do worse!

An Economic, Environmental, and Technical Analysis of Biomass Sourced Jet Fuel.

The paper I'll discuss in this post is this one: Comprehensive Life Cycle Evaluation of Jet Fuel from Biomass Gasification and Fischer–Tropsch Synthesis Based on Environmental and Economic Performances (Xiao et al, Ind. Eng. Chem. Res. 2019, 58, 19179−19188)

I have very little use for Bill McKibben of 350.org. Although he "cares" loudly about climate change, he is nothing more than a journalist, and a cowardly one at that, since it is increasingly obvious that his prescribed solution, so called "renewable energy," has clearly not worked, is not working, and won't work. I often joke that one can only get a degree in journalism these days if one has not passed a college level science course.

No one now living will ever see an atmospheric concentration of the dangerous fossil fuel waste carbon dioxide measuring under 400 ppm again, never mind "350." Next year I'm certain I'll be able to say - if still alive - "under 410 ppm again." The blind, and frankly ignorant, faith in so called "renewable energy" is one reason why this is so. The more than 2 trillion dollars spent in the last ten years alone on this scheme has caused climate change to accelerate, not decline. We are now seeing increases of 2.4 ppm/year, an unprecedented rate.

I call McKibben a "coward" because it takes courage to say "I am wrong" or "I was wrong," and he clearly lacks this ability, since the only way to be serious about climate change is to embrace science and engineering, as opposed to driving one's Prius (or Tesla) to protests chanting "We want renewable energy now!" and carrying signs that the bearers consider witty. Over the last several hours I've been studying lignins, a component of wood and the stalks of many plants, and as a result have been studying the environmentally dubious Kraft process for wood pulping, which is utilized to make paper for the signs people carry to their protests stating how much they care about the climate.

Bill McKibben lacks both the courage and the intellectual insight and education to be able to say the word "nuclear."

If one respects science, one considers how scientists work. We have theories or hypotheses which must be tested by experiment. If the experimental results invalidate the theory, the theory goes, not the experimental result. We don't make Trumpian-scale excuses for the experimental result in order to save a precious theory, a theory which, by being precious, has become blind faith. The experimental results of the multitrillion dollar "renewable energy will save us" theory are in; climate change is accelerating, not being ameliorated. It's time for the theory to be rejected. Denial and excuses for the experimental result are meaningless. No one now living will ever see an atmospheric concentration of the dangerous fossil fuel waste carbon dioxide measuring under 400 ppm again. The so called "renewable energy" experiment did not work; it is not working; it won't work.

The purpose of this riff on McKibben, whom I obviously hold in low regard, is to address a bit of "Gotcha," which has come to permeate our culture of anti-thinking, the age of twits posting twitter witticisms, all of which are making the world worse, not better.

To avoid "Gotcha" statements, the young climate activist Greta Thunberg took a sailboat across the ocean to address the UN on climate change. She declined to fly, since flying requires the consumption of rather large amounts of fuels based on dangerous petroleum. This reminds me of a statement I heard attributed to Mahatma Gandhi, in which he remarked that his advisers complained that it was very expensive to be sure he kept his vow of poverty.

By the way, I have enormous respect for Greta Thunberg, because I think she is right to ask us "How DARE you?!!!" about what we in my generation have done to hers.

History will not forgive us; nor should it.

It's OK for Greta Thunberg to not know anything about engineering by the way; she's sixteen. (Bill McKibben is 58.)


In general, as I've just made clear, I am hostile to so called "renewable energy," not because it's slightly better than dangerous petroleum, dangerous coal and dangerous natural gas when it functions, but because it requires dangerous petroleum, dangerous coal and dangerous natural gas to back it up when it's not working, which is often. This is why it is not working and won't work, and why Germany and Denmark have the highest electricity prices in the OECD: a system that requires redundancy is obviously more expensive than one that doesn't, and not only that, it is worse from an environmental standpoint. (We hit 415 ppm of CO2 this year.)

Still, despite my hostility to so called "renewable energy," I am flexible enough to be intrigued by what is, by far, the largest source of it: biomass. As practiced now, biomass is a health and environmental disaster. Slightly less than half of the world's 7 million air pollution deaths each year derive from it; the Mississippi River Delta system, along with other bodies of water, has been destroyed by runoff of the agricultural fertilizer used to make corn ethanol; and the Indonesian and Malaysian rain forests are being rototilled to make biodiesel to meet German "renewable portfolio standards."

Nevertheless, biomass relatively efficiently captures carbon dioxide from the air, and this is a non-trivial task that we are leaving for Greta Thunberg's generation to accomplish with depleted resources and a degraded planet. Biomass, especially algal biomass, is fast growing, self-replicating, and capable of covering the large surface area required to overcome the entropy of mixing that makes cleaning up the dangerous fossil fuel waste carbon dioxide so difficult. Thus it cannot be ignored.
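The "entropy of mixing" problem can be made quantitative. The ideal minimum work to extract one mole of CO2 present at mole fraction x in a gas mixture is about RT ln(1/x), a standard dilute-limit estimate; this formula and the 415 ppm figure are textbook values, not taken from the jet fuel paper. A quick sketch:

```python
import math

R = 8.314    # J/(mol*K), gas constant
T = 298.15   # K, room temperature

def min_separation_work_kj_per_mol(mole_fraction: float) -> float:
    """Ideal (isothermal, reversible) minimum work to extract one mole
    of CO2 present at the given mole fraction - the entropy-of-mixing
    penalty in the dilute limit. Real processes need several times this."""
    return R * T * math.log(1.0 / mole_fraction) / 1000.0

# Atmospheric CO2 at ~415 ppm versus a concentrated flue gas at ~12%:
for label, x in (("air, 415 ppm", 415e-6), ("flue gas, 12%", 0.12)):
    print(f"{label}: {min_separation_work_kj_per_mol(x):.1f} kJ/mol minimum")
```

The roughly fourfold thermodynamic penalty for working from air, before any real-world inefficiency, is exactly why a self-replicating collector with an enormous surface area is attractive.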

This brings me to the paper cited at the outset. This is one way to make jet fuel so that Greta Thunberg can feel safe to fly someday, but there are others, one of my personal favorites being that proposed by the US Naval scientist Heather Willauer, although in truth it is less than ideal, since it requires an electricity intermediate and is thus thermodynamically questionable.

The best way to deal with biomass, in my opinion, is heat driven gasification, which is what the paper cited at the opening of this post is about.

The cartoon graphic introducing the paper:

From the introduction:

With increased aviation travel and limited substitutes in this area, jet fuel demand has increased significantly. The traditional jet fuel consumes huge fossil energy and leads to serious environmental pollution. With the global warming effect, biomass, as a renewable resource to produce jet fuel, has attracted progressively more attention at the global scale. In recent years, the conversion routes of jet fuel derived from biomass mainly include catalytic cracking-olefin oligomerization, hydroprocessed esters, and fatty acids, Fischer–Tropsch (FT) synthesis, hydrothermal liquefaction, and fermentation alcohol synthesis.(1−10) However, the environment, resource, and economic performances of biomass-based jet fuel need to be evaluated and compared for seeking beneficial technical pathways.

The life cycle assessment (LCA) is a method for evaluating the environmental impact of a product throughout its life cycle. In order to compare the influence of different processes of biomass-based jet fuel on the environment and resources, some literature studies carried out a variety of life cycle evaluations of the abovementioned conversion processes. These studies mainly focused on the contribution of biomass-based liquid fuel to mitigate the greenhouse effect. Moreover, some comprehensive evaluations were based on the fuzzy mathematics method, such as the analytic hierarchy process (AHP).
Several researchers(7−9) performed the LCA of biomass-based jet fuel derived from hydrothermal liquefaction (HTL) of microalgae. Two HTL processes of algal jet fuel based on the different circumstances were analyzed, and Monte Carlo simulation and sensitivity analysis were completed. The results showed that the transportation of microalgae led to the increase in the life cycle climate change impacts, and compared to the process of petroleum-based jet fuel, greenhouse gas emissions could be reduced by 76.0% based on the optimized process of algal jet fuel.

Klein et al.(3,4) compared different routes for renewable jet fuel (RJF) production integrated with sugarcane biorefineries in Brazil based on the technoeconomic and environmental assessments. They concluded that hydroprocessed esters and fatty acids exhibited the highest production potential and FT synthesis showed the best economic performance among the studied scenarios of RJF. Moreover, all conversion technologies of RJF could reduce greenhouse gas emissions by more than 70% compared to the process of petroleum-based jet fuel...(10)

...Moreover, many researchers have integrated the AHP into LCA to evaluate the comprehensive performance of products.(14−16) Tao et al.(6) obtained a resource-environment-economic comprehensive performance evaluation model of biomass-based jet fuel from biomass gasification and FT synthesis based on AHP. They showed that the case of biomass-based jet fuel combined with waste heat for power generation exhibited a lower environmental impact than that combined with heat supply directly and the reduction of environmental impact indicators was in the range of 11.7–40.8%. Compared to petroleum-based jet fuel, the global warming potential (GWP) of biomass-based jet fuel reduced by 52.6–71.9% and the nonrenewable resource consumption reduced by 84.4–93.6%. Different environmental impact distribution methods, such as based on economic value distribution, energy distribution, and mass distribution, used in the biomass growth stage led to significant changes in the environmental evaluation, in particular, for GWP and eutrophication potential (EP). It could also be found that the comprehensive performance of biomass-based jet fuel is the most sensitive to feedstock consumption...

...The method of monetization is more objective and rational, which has the same criteria for weighting economic performance, resource performance, and environmental performance. Therefore, the comprehensive evaluation obtained is fairer to the entire society, and its decision-making meaning is more perfect. This study not only employed the monetization method to reflect economic benefits but also completed the comprehensive analysis through the monetization method on resource and environment, to avoid the subjective factors in comprehensive evaluation.

Some graphics from the text beginning with a process flow sheet diagram:

Figure 1. Process of jet fuel from biomass gasification and Fischer–Tropsch synthesis

It is important to note that this analysis relies on combustion heat, and not nuclear heat, and therefore can be improved upon. Specifically, in this diagram the heat is generated by the combustion of biomass, reducing the amount that can be recovered as a biofuel. However, I very much like the FT approach and the heat exchange networks.

Two cases are considered:
Considering petroleum-based jet fuel as a reference, the performances of economy, resource, and environment were reflected by relative economic benefits (REBs), nonrenewable resources saving benefits (NRSBs), and pollution mitigation benefits (PMBs). These indicators are defined in the subsequent sections. Each alkane mixture is separated by distillation, and then the final product jet fuel (C8−C16), gasoline (C5−C7), and diesel (C17−C20) are obtained, in addition to by-product wax. The steam generated by the waste heat is supplied for two cases, that is, heat directly (Bio-FTJ-1) and power generation (Bio-FTJ-2) cases.
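The distillation cuts quoted above (gasoline C5−C7, jet fuel C8−C16, diesel C17−C20, with wax as a by-product) amount to a simple classification of the Fischer–Tropsch alkanes by carbon number; a trivial sketch:

```python
def ft_product_cut(carbon_number: int) -> str:
    """Classify a Fischer-Tropsch alkane by carbon number, using the
    cuts quoted from the paper: gasoline C5-C7, jet fuel C8-C16,
    diesel C17-C20, and wax as the by-product above C20."""
    if carbon_number < 5:
        return "light gas"
    if carbon_number <= 7:
        return "gasoline"
    if carbon_number <= 16:
        return "jet fuel"
    if carbon_number <= 20:
        return "diesel"
    return "wax"

for n in (3, 6, 12, 18, 30):
    print(f"C{n}: {ft_product_cut(n)}")
```

The "light gas" label for anything below C5 is my own placeholder; the excerpt only names the four product fractions.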

Here are the grounds for the LCA analysis; note the presence of fertilizers and pesticides. These may not be necessary if the water utilized to grow the biomass is municipal waste water or agricultural runoff water, since these are potential media for algae growth. The big problems with algae growth are dewatering and transfer, both of which can be addressed to improve the process: dewatering by the use of waste heat, transport by direct flow into reactors. (This would also have the added advantage of recovering phosphorus, the depletion of which is another very, very, very, very serious matter we are dumping, with contempt, on Greta Thunberg's generation. How DARE we?)

Figure 2. Scope of LCA of Bio-FTJ systems.

An issue often ignored is the material cost of so called "renewable energy," which calls into question just how "renewable" it is. This is a serious paper, not hand waving, and that issue is not ignored here:

Table 2: from the paper:

Costs of this process, again analyzed in the absence of nuclear heat:

It is important to note that in the case of dangerous petroleum fuels, the economic costs of the destruction they cause - the costs of deaths and disease from air pollution, and the cost of climate change, i.e. "external costs" - are not included. If they were, petroleum would be too expensive to use, inspiring idiots like Jim Kunstler to carry on about how we'll all die without oil, that "peak oil" nonsense. These external costs should be included in the analysis of the cost of petroleum jet fuel in Table 4, but they are not. I do not mean to criticize the authors or their fine work here; they are simply reflecting the fact that we blindly accept these enormous dangerous fossil fuel costs by habit while we all wait breathlessly for the grand renewable nirvana that never comes, not because doing so is morally or intellectually justifiable.

Table 4:

For the next few graphics, there is a parameter called "ICP" for Indicator of Comprehensive Performance. There are also parameters associated with the weighting of these indicators, described in the text as follows:

Considering petroleum-based jet fuel as a reference, the performances of economy, resource, and environment were reflected by relative economic benefits (REBs), nonrenewable resources saving benefits (NRSBs), and pollution mitigation benefits (PMBs). These indicators are defined in the subsequent sections.

The weighting factors utilized in the analysis of these are assigned in the graphic below, where the weighting factors are described thusly:

α, β, and γ represent the weighting coefficients of REB, NRSB, and PMB, respectively.
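The excerpted text does not spell out how REB, NRSB and PMB are combined into the ICP, but since all three are monetized to a common basis, a weighted sum ICP = α·REB + β·NRSB + γ·PMB with α + β + γ = 1 is the natural reading. A sketch under that assumption, with made-up benefit values:

```python
def icp(reb: float, nrsb: float, pmb: float,
        alpha: float, beta: float, gamma: float) -> float:
    """Weighted aggregate of the three monetized benefit indicators.
    The linear form is an assumption for illustration; the weights
    are required to sum to 1."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * reb + beta * nrsb + gamma * pmb

# Made-up benefit values (monetized, same units) for illustration only:
reb, nrsb, pmb = -0.2, 0.5, 0.8

# Shifting weight from economics toward pollution mitigation raises the
# ICP of a process that loses money but saves emissions.
print(icp(reb, nrsb, pmb, 1/3, 1/3, 1/3))
print(icp(reb, nrsb, pmb, 0.2, 0.3, 0.5))
```

This is why the figure below sweeps the weighting coefficients: the ranking of the two Bio-FTJ cases can flip depending on how much the economic term counts.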

Figure 3. ICP with different weighting coefficients.

The next graphic, on the sensitivity of benefits to the price of oil, depends on the dubious assumption, with which we all live, that dangerous fossil fuel users are allowed to dump the dangerous fossil fuel waste without charge.

Figure 4. Sensitivity of ICP to different prices.

Another genuflection to the fact that dangerous fossil fuel wastes can be dumped without charges accruing to users and dangerous fossil fuel companies:

Figure 5. Sensitivity of ICP to resource consumption and pollutant emission.

Figure 6. Influence of into-factory price of stalks on performance.

Figure 6 refers to the price of stalks delivered to the plant; note that this process is based on plant stalks, not algae.

Figure 7. Influence of stalk consumption on performance.

And the final figure refers to the influence of the cost of oil, which is subsidized by lung tissue, the destruction of habitats, and the destruction of the future of Greta Thunberg's generation and all generations after hers.

Figure 8. Influence of the price of crude oil on REB.

Some conclusions to the paper:

Compared to Bio-FTJ-1, Bio-FTJ-2 can achieve greater benefits in saving nonrenewable resource and can emit less CO2 and other pollutants because it significantly reduces the consumption of external power input. However, owing to the high production cost of Bio-FTJ-2, its economic benefit is very low. Therefore, ICP of Bio-FTJ-2 is lower than that of Bio- FTJ-1.

According to the sensitivity analysis, the comprehensive performance of the two processes is highly sensitive to the price of crude oil and stalk consumption and the Bio-FTJ-1 is highly sensitive to electricity consumption. The higher the price of crude oil is, the better the comprehensive performance of the Bio-FTJ is. The results of this study indicate that the comprehensive performance of Bio-FTJ can be improved significantly by the reduction of the consumption of stalks and external power input in the production.

I trust you're having a nice day.

Photochemical Reduction of the Soluble Radioactive Pertechnetate Ion to Insoluble TcO2.

The paper I'll discuss in this post is this one: Efficient Photocatalytic Reduction of Aqueous Perrhenate and Pertechnetate (Shi et al, Environ. Sci. Technol. 2019, 53, 18, 10917-10925)

Technetium is a synthetic element - the element in the periodic table with the lowest atomic number for which no stable isotopes exist - that is often regarded as so called "nuclear waste," something which is true in the paper I'm about to discuss. (I personally argue that there is no such thing as "nuclear waste" in the absence of stupidity, fear and ignorance, but that's my opinion. Fear and ignorance are far more popular and far more powerful than any of my opinions will ever be.)

The most common use for technetium is in medicine: the short lived nuclear isomer Tc-99m is the workhorse of medical imaging, as well as of some treatment modalities. It decays to the same isotope as is found in used nuclear fuel, Tc-99. People who have undergone medical testing and treatment with Tc-99m generally piss the resultant Tc-99 decay product away, because in general it is in the form of the highly soluble TcO4- anion, known as the pertechnetate ion. In addition, unlike other soluble radioactive fission products such as isotopes of cesium and strontium (although strontium sulfate and carbonate are insoluble, its nitrate is quite soluble), the pertechnetate ion has a fairly low affinity for adhesion to minerals. It migrates quite readily.
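Tc-99m decays to Tc-99 with a half-life of about 6.0 hours (a standard literature value, not from the paper under discussion), so essentially the entire administered dose has become long-lived Tc-99 within a few days. A quick sketch of the decay arithmetic:

```python
import math

TC99M_HALF_LIFE_H = 6.01  # hours, standard literature value

def fraction_remaining(t_hours: float) -> float:
    """Fraction of the original Tc-99m not yet decayed to Tc-99."""
    return math.exp(-math.log(2) * t_hours / TC99M_HALF_LIFE_H)

for t in (6.01, 24.0, 72.0):
    remaining = fraction_remaining(t)
    print(f"after {t:5.1f} h: {remaining:.4%} Tc-99m left, "
          f"{1 - remaining:.4%} already Tc-99")
```

After three days, well under 0.1% of the isomer remains, which is why patients excrete essentially pure Tc-99.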

Historically fission product technetium from commercial nuclear reprocessing has been dumped into the ocean. This was true at both Sellafield in the UK and at La Hague in France, which is unfortunate, not because there is an incredible risk to the environment because of this practice, but because the potentially valuable element was not recovered.

Technetium metal has many interesting properties, both as a surrogate or potential replacement for the relatively rare and expensive element rhenium, which is essential to modern technology, and in ways in which it is actually superior to rhenium, for example in dehydrogenation reactions of alcohols, chemistry which conceivably might play a role in eliminating the mining of dangerous petroleum - with all the observed tragedy that represents - for the production of polymers. (cf. Theoretical design of a technetium-like alloy and its catalytic properties, Koyama and Xie, Chem. Sci., 2019, 10, 5461-5469. The authors of that paper claim, without much justification, that technetium is "too dangerous" to use and therefore attempt to duplicate its electronic structure by alloying other metals.)

The pertechnetate ion is an excellent corrosion inhibitor, and personally I have been extremely interested in technetium alloys, some of which have extremely valuable properties. The hardness of technetium tetraboride is exceeded only by its rhenium analogue.

I'm not necessarily a big fan of nitric acid dissolution of used nuclear fuels - I think there are better approaches to performing this essential task - but the reality is that this has historically been, and probably still is, the most prevalent way the valuable materials in them are recovered. In nitric acid type dissolutions, the chemical form of technetium is generally the pertechnetate ion. This is, for example, how it is found in the Hanford tanks that dumb anti-nukes always carry on about, even though they are spectacularly uninterested in the 7 million air pollution deaths that occur each year because we don't have more technetium.

The recovery of technetium for the exploitation of its many useful properties, now that it is available to humanity, will therefore require facile methods for its removal from aqueous solutions of pertechnetate, which is why this paper caught my eye.

From the introduction, covering some of what I've just said and some things I didn't say:

Technetium-99 (99Tc), a β-emitting isotope (βmax = 293.7 keV), is generated from thermal-neutron-induced fission of uranium-235 (235U) and spontaneous fission of 238U in the earth’s crust.(1,2) 99Tc is also formed from the decay of the medical radioisotope 99mTc with a half-life of only 6.0 h.(3) The most common chemical form is pertechnetate 99TcO4–, which is of particular environmental concern due to the long half-life of 99Tc (2.13 × 10⁵ years)(1) and the resistance to adsorption on mineral surfaces and sediments that results in migration with potential ecosystem risks.(4−7)
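The long half-life quoted above is worth translating into specific activity, since long half-lives mean low activity per gram. A quick back-of-the-envelope calculation using the standard decay relation (the half-life and molar mass come from the quoted text; the arithmetic is mine, not the paper's):

```python
import math

AVOGADRO = 6.02214076e23      # atoms per mole
SECONDS_PER_YEAR = 3.1557e7

def specific_activity_bq_per_g(half_life_years, molar_mass_g_per_mol):
    """Specific activity A = (ln 2 / t_half) * (N_A / M), in Bq per gram."""
    decay_constant = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    return decay_constant * AVOGADRO / molar_mass_g_per_mol

# Tc-99: t_half = 2.13e5 years, M ~ 99 g/mol
a_tc99 = specific_activity_bq_per_g(2.13e5, 99.0)
print(f"{a_tc99:.2e} Bq/g")  # roughly 6e8 Bq/g, i.e. about 0.017 Ci/g
```

For comparison, the 6-hour Tc-99m isomer mentioned in the same passage has a specific activity roughly eight orders of magnitude higher, which is why the residual Tc-99 from medical use is radiologically modest even though it is long-lived.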

Because all technetium isotopes are radioactive, research progress is challenging. As a result, rhenium (Re) is often used as a nonradioactive chemical analogue of 99Tc.(8−11) One of the various methods used for removal of 99TcO4–/ReO4– from aqueous solution is conventional solvent extraction.(12,13) Nevertheless, there remain shortcomings, such as utilization of large amounts of toxic and volatile organic reagents, resulting in production of secondary wastes. Alternative ion exchange methods(14−16) require high quality of raw liquid to avoid column blockage. Despite a recent breakthrough toward TcO4– elimination via molecular recognition,(17) long-term storage stability of Tc-containing materials requires further attention, and large-scale practical applications have not been demonstrated.(18) Solid waste forms for 99Tc immobilization include metals such as Tc-Zr alloys(19) and borosilicate glasses.(20) Disadvantages of the latter are oxidation and release of volatile Tc molecules during high-temperature vitrification.(1)

An appealing method to immobilize 99Tc is reduction of soluble Tc(VII) to sparingly soluble Tc(IV) with removal from aqueous solution as 99TcO2·nH2O species,(8,21) which can be separated by physical filtration and then converted to metal or other waste forms for long-term disposal.(19,20)

Common reducing agents such as SO32–, Sn2+, Fe2+,(9,22,23) and biomass(24,25) are exhausted in one cycle and not readily reused. Using Fe(0)/Fe(II) as the reductant couple, 99Tc/Re was sequestrated using a simultaneous adsorption–reduction strategy.(21,26−28) Electrochemical methods(29−31) involve toxic chemicals, and furthermore, the presence of SO42– suppressed Re(VII) reduction in aqueous solution. Although γ-radiation-induced reduction(32) via hydrated electrons might efficiently reduce and separate Re(VII), the conditions are impractical. Photochemical-induced reduction(31,32) of Re(VII) using broadband UV or laser irradiation over 6 h afforded 94.7% recovery of Re; unfortunately, the high molar absorptivity of Re(VII) limits the practical concentration of Re(VII).

Heterogeneous semiconductor-based photocatalytic reduction of heavy metal ions such as Cu2+, Hg2+, Ag+, U(VI), and Cr(VI)(33−37) has been proposed. Many photocatalysts are regarded as environmentally friendly materials because of their chemical inertness and biological compatibility in natural systems. For example, titanium dioxide (TiO2) is a good prospect for photocatalytic reduction and removal of metal ions due to its high resistance to photocorrosion, nontoxicity,(38) low environmental pollution, regeneration ability, low cost, and convenient operations.(38,39) Evans et al.(40) reported selective removal (98%) of uranium from waste liquid containing strong complexing agents using TiO2 as a photocatalyst. Wang et al.(41) prepared a TiO2/g-C3N4 heterojunction composite that facilitated rapid separation and transfer of photogenerated electrons, thus achieving efficient reduction and fixation of uranium...

...The objective of this study was to provide fundamental understanding of photocatalytic 99Tc/Re reduction and removal using TiO2 nanoparticles in the presence of HCOOH. Most of this work was still conducted using nonradioactive ReO4– as a surrogate for 99TcO4–.(8,42) Anyway, the reported 99Tc(VII/IV) redox potential (E0 = +0.74 V) is somewhat more positive than that for Re(VII/IV) (E0 = +0.51 V), which means that photocatalytic reduction of Tc(VII) should be more energetically favorable. In addition, the reduction/removal mechanism was elucidated by photoelectrochemical measurements, electron paramagnetic resonance spectroscopy, X-ray photoelectron spectroscopy, and X-ray absorption spectroscopy. These results suggest an environmentally friendly photocatalytic approach for 99TcO4–/ReO4– removal and sequestration from aqueous solution.
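The claim in the excerpt that photocatalytic Tc(VII) reduction "should be more energetically favorable" than Re(VII) reduction follows directly from dG0 = -nFE0 applied to the two three-electron couples. The potentials are from the quoted passage; the arithmetic below is mine:

```python
FARADAY = 96485.332  # Faraday constant, C/mol

def gibbs_kj_per_mol(n_electrons, e0_volts):
    """Standard Gibbs energy of reduction, dG0 = -n * F * E0, in kJ/mol."""
    return -n_electrons * FARADAY * e0_volts / 1000.0

# Both M(VII) -> M(IV) couples transfer three electrons.
dg_tc = gibbs_kj_per_mol(3, 0.74)  # Tc(VII)/Tc(IV), E0 = +0.74 V
dg_re = gibbs_kj_per_mol(3, 0.51)  # Re(VII)/Re(IV), E0 = +0.51 V
print(f"Tc: {dg_tc:.0f} kJ/mol, Re: {dg_re:.0f} kJ/mol")
```

The difference of roughly 66 kJ/mol means the rhenium surrogate is actually the harder case, so conditions that reduce Re(VII) should reduce Tc(VII) at least as readily.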

Titanium dioxide is a very cool photocatalyst in general, love it!

The experimental light source here is in the UV range, 320 nm, which means we cannot, in any verified way, apply the magic word on which we've bet the planetary atmosphere with poor results, "solar." The authors are happy to apply that word, but the experiments, performed with a xenon lamp, used UVA radiation.
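For orientation, the photon energy at 320 nm sits comfortably above the roughly 3.2 eV band gap of anatase TiO2 (the band gap figure is textbook knowledge, not from this paper), which is why near-UV light drives the photocatalysis:

```python
PLANCK = 6.62607015e-34      # Planck constant, J s
LIGHT_SPEED = 2.99792458e8   # speed of light, m/s
JOULES_PER_EV = 1.602176634e-19

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/lambda, converted to electron volts."""
    return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9) / JOULES_PER_EV

print(f"{photon_energy_ev(320.0):.2f} eV")  # ~3.87 eV, above TiO2's ~3.2 eV gap
```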

UV radiation is continuously available by down-converting X-rays and gamma rays from fission products using barium fluoride, so this should not present much of a problem in a putative industrial reprocessing plant.

Most of the work was performed using a rhenium surrogate for technetium, although ultimately technetium was directly utilized:

Tc Removal

99Tc was obtained as a 2% HNO3 stock solution of potassium pertechnetate (KTcO4) from China Institute of Atomic Energy. The 99Tc experiments were performed in a special radiological laboratory. In accordance with the above experimental protocol for Re, the corresponding 99TcO4– solution was illuminated for 150 min under the identified optimal Re(VII) reduction/removal conditions. Residual concentration of 99Tc was analyzed by a liquid scintillation counter (Tri-Carb, PerkinElmer). Aliquots of 0.5 mL were periodically collected during light irradiation and filtered through 0.2 μm Millipore membranes before analysis. 0.2 mL of the filtrate was then mixed with 5 mL of liquid scintillation cocktail (ULTIMA Gold, PerkinElmer) and held in a 6 mL plastic scintillation vial for measurements. The reacted suspension was stirred in air to observe the reoxidation and release of reduced Tc.
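The removal figures in experiments like the one quoted above come straight from the ratio of background-corrected count rates before and after irradiation. A minimal sketch of that bookkeeping, with made-up count rates purely for illustration (the paper does not report raw counts here):

```python
def removal_percent(c0_cpm, ct_cpm, background_cpm=0.0):
    """Percent of activity removed from solution, computed from liquid
    scintillation count rates (counts per minute) on equal-volume aliquots."""
    net_initial = c0_cpm - background_cpm
    net_final = ct_cpm - background_cpm
    return 100.0 * (1.0 - net_final / net_initial)

# Hypothetical numbers: 12000 cpm before irradiation, 600 cpm after,
# with a 30 cpm instrument background.
print(f"{removal_percent(12000.0, 600.0, background_cpm=30.0):.1f}%")  # 95.2%
```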

Some pictures from the text:

Figure 1. (A) Removal of Re(VII), for no TiO2 and 0.4 g L–1 TiO2 in different conditions; pH = 3, [HCOOH] = 1%, [Re(VII)] = 5 mg L–1. Removal of Re(VII) for different concentrations of HCOOH, for (B) no light and (C) UV–visible irradiation; pH = 2, [Re(VII)] = 10 mg L–1. (D) Removal of Re(VII) with different organic additives under light irradiation; pH = 3, [organic additive] = 1%, [Re(VII)] = 5 mg L–1. V = 50 mL, T = 298 K throughout.

Figure 2. (A) First-derivative EPR spectra of DMPO spin adducts. In the dark: TiO2, HCOOH, and TiO2/HCOOH/Re(VII); under light: TiO2, HCOOH, TiO2/HCOOH, and TiO2/HCOOH/Re(VII). (B) TiO2 current–potential measurements: (black) Idark with 0.1 mol L–1 Na2SO4 + 0.1% HCOOH + 5 mg L–1 Re(VII); (blue) Iphoto with 0.1 mol L–1 Na2SO4; (red) Iphoto with 0.1 mol L–1 Na2SO4 + 0.1% HCOOH; (green) Iphoto with 0.1 mol L–1 Na2SO4 + 0.1% HCOOH + 5 mg L–1 Re(VII).

Figure 3. Time profiles of Re(VII) reduction during the irradiation of TiO2 suspensions with N2 bubbling, V = 50 mL, T = 298 K. (A) Various dosages of TiO2, [HCOOH] = 1%, [Re(VII)] = 10 mg L–1, pH = 2. (B) Effects of initial Re(VII) concentration, [HCOOH] = 1%, 0.2 g L–1 TiO2, pH = 2. (C) Influence of NO3– concentration, [HCOOH] = 1%, [Re(VII)] = 10 mg L–1, 0.2 g L–1 TiO2, pH = 2. (D) Solution pH values, [HCOOH] = 0.2%, [Re(VII)] = 10 mg L–1, 0.4 g L–1 TiO2.

Figure 4. (A) Cycling runs of TiO2 for photocatalytic reduction of Re(VII). Time profiles of Re(VII) reduction during the irradiation of 0.4 g L–1 TiO2 suspensions at pH = 3, with N2 bubbling, [HCOOH] = 1%, [Re(VII)] = 5 mg L–1, V = 50 mL, T = 298 K. (B) Color change of both solid and solution before and after photocatalysis.

Figure 5. Time profiles of 99Tc(VII) and Re(VII) reduction during the irradiation of 0.4 g L–1 aqueous TiO2 suspensions at pH = 3, with N2 bubbling, [HCOOH] = 1%, [99Tc(VII)] or [Re(VII)] = 0.05 mmol L–1, [NO3–] = 20 mmol L–1, V = 50 mL, T = 298 K.

I'm not convinced this process is necessarily worthy of industrialization. The text suggests that nitrate is a problem.

I think it's time to move past the workhorse Purex-type solvent extraction process, and there are many other approaches to the recovery of technetium for use. Still, one can imagine this process being of some utility in some places, for example in extant situations where pertechnetate is migrating in the environment.

I trust you're having a nice afternoon.

How evolution builds genes from scratch.

The news item I'll discuss in this post is this one: How evolution builds genes from scratch

I don't think I logged into Nature when I saw it, so I believe it's open access.

A lot of my day-to-day work involves proteomics, either directly or indirectly. I am therefore often required to think about protein isoforms, many of which arise from genetic differences between people and between related organisms; there is little more fascinating than seeing some forms highly conserved throughout evolution alongside variable, and indeed vestigial, proteins and sequences.

A surprise of the automated gene sequencing that produced the human genome sequence, as well as the subsequent genome mapping of many other species, is how much "junk DNA" there is, some of which consists of artifacts of ancient viral infections in ancestral organisms.

This news article suggests that new genes can sometimes arise from turning on "junk DNA."

Some excerpts:

Some cod species have a newly minted gene involved in preventing freezing. Credit: Paul Nicklen/NG Image Collection

In the depths of winter, water temperatures in the ice-covered Arctic Ocean can sink below zero. That’s cold enough to freeze many fish, but the conditions don’t trouble the cod. A protein in its blood and tissues binds to tiny ice crystals and stops them from growing.

Where codfish got this talent was a puzzle that evolutionary biologist Helle Tessand Baalsrud wanted to solve. She and her team at the University of Oslo searched the genomes of the Atlantic cod (Gadus morhua) and several of its closest relatives, thinking they would track down the cousins of the antifreeze gene. None showed up. Baalsrud, who at the time was a new parent, worried that her lack of sleep was causing her to miss something obvious.

But then she stumbled on studies suggesting that genes do not always evolve from existing ones, as biologists long supposed. Instead, some are fashioned from desolate stretches of the genome that do not code for any functional molecules. When she looked back at the fish genomes, she saw hints this might be the case: the antifreeze protein — essential to the cod’s survival — had seemingly been built from scratch1.

The cod is in good company. In the past five years, researchers have found numerous signs of these newly minted ‘de novo’ genes in every lineage they have surveyed. These include model organisms such as fruit flies and mice, important crop plants and humans; some of the genes are expressed in brain and testicular tissue, others in various cancers...

...Back in the 1970s, geneticists saw evolution as a rather conservative process. When Susumu Ohno laid out the hypothesis that most genes evolved through duplication2, he wrote that “In a strict sense, nothing in evolution is created de novo. Each new gene must have arisen from an already existing gene.”

Gene duplication occurs when errors in the DNA-replication process produce multiple instances of a gene. Over generations, the versions accrue mutations and diverge, so that they eventually encode different molecules, each with their own function. Since the 1970s, researchers have found a raft of other examples of how evolution tinkers with genes — existing genes can be broken up or ‘laterally transferred’ between species. All these processes have something in common: their main ingredient is existing code from a well-oiled molecular machine...

...But genomes contain much more than just genes: in fact, only a few per cent of the human genome, for example, actually encodes genes. Alongside are substantial stretches of DNA — often labelled ‘junk DNA’ — that seem to lack any function. Some of these stretches share features with protein-coding genes without actually being genes themselves: for instance, they are littered with three-letter codons that could, in theory, tell the cell to translate the code into a protein.

It wasn’t until the twenty-first century that scientists began to see hints that non-coding sections of DNA could lead to new functional codes for proteins. As genetic sequencing advanced to the point that researchers could compare entire genomes of close relatives, they began to find evidence that genes could disappear rather quickly during evolution...

...Some of these genes-in-waiting, or what Carvunis and her colleagues called proto-genes, were more gene-like than others, with longer sequences and more of the instructions necessary for turning the DNA into proteins. The proto-genes could provide a fertile testing ground for evolution to convert non-coding material into true genes. “It’s like a beta launch,” suggests Aoife McLysaght, who works on molecular evolution at Trinity College Dublin...
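The excerpts' point about junk DNA being "littered with three-letter codons" is easy to make concrete: at its very simplest, spotting gene-like stretches is a scan for in-frame start and stop codons. A toy open-reading-frame finder (the sequence below is invented for illustration; real annotation pipelines are far more sophisticated):

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=2):
    """Return ATG-to-stop open reading frames in the three forward frames."""
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j + 3] not in STOP_CODONS:
                    j += 3
                if j + 3 <= len(seq):          # found an in-frame stop codon
                    if (j - i) // 3 >= min_codons:
                        orfs.append(seq[i:j + 3])
                    i = j + 3                  # resume scanning past this ORF
                    continue
            i += 3
    return orfs

print(find_orfs("ATGAAATTTTAG"))  # ['ATGAAATTTTAG']: ATG-AAA-TTT then TAG
```

Random sequence is full of such accidental reading frames, which is part of why proto-genes are so plentiful, and why most of them never become functional genes.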

There is also a nice cartoon in the news article.

Interesting I think.
