Member since: Wed Oct 13, 2004, 05:42 PM
Number of posts: 4,043
Posted by LongTomH | Sun Apr 14, 2013, 05:31 PM (22 replies)
It might be said that some contemporary futurists tend to use technological innovation and scientific discovery in the same way God was said to use the whirlwind against defiant Job, or Donald Rumsfeld treated the poor citizens of Iraq a decade ago. It’s all about the “shock and awe.” One glance at something like KurzweilAI.net leaves a reader with the impression that brand new discoveries are flying off the shelf by the nanosecond and that all of our deepest sci-fi dreams are about to come true. No similar effort is made, at least that I know of, to show all the scientific and technological paths that have led into cul-de-sacs, or to chart all the projects packed up and put away like our childhood chemistry sets to gather dust in the attic of the human might-have-been. In exact converse to the world of political news, in technological news it’s the jetpacks that do fly we read about, not the ones that never get off the ground.
Aside from the technologies themselves, future-oriented discussion of the potential of technologies or scientific discoveries tends to come in two stripes when it comes to political and ethical concerns: we’re either on the verge of paradise or about to make Frankenstein seem like an amiable dinner guest.
There are a number of problems with this approach to science and technology. I can name more, but here are three: 1) it distorts the reality of innovation and discovery; 2) it isn’t necessarily true; 3) the political and ethical questions, which are the most essential ones, are too often presented in a simplistic all-good or all-bad manner, when any adult knows that most of life is like ice cream: it tastes great and will make you fat.
Then we have the issue of reality: anyone familiar with the literature or websites of contemporary futurists is left with the impression that we live in the most innovative and scientifically productive era in history. Yet things may not be as rosy as they appear when we only read the headlines. At least since 2009, there has been a steady chorus of well-respected technologists, scientists and academics telling us that innovation is not happening fast enough; that is, our rates of technological advancement are not merely failing to exceed those of the past, they are not even matching them. A common retort to this claim might be to club whoever said it over the head with Moore’s Law: surely, with computer speeds increasing exponentially, it must be pulling everything else along. But, to pull a quote from ol’ Gershwin, “it ain’t necessarily so.”
Read the rest here: http://ieet.org/index.php/IEET/more/searle20130331
The Institute for Ethics and Emerging Technologies can be a fun website. The problem is that a lot of the time they're 'off in the ozone,' talking about transhumanism, the technological singularity, or uploading your brain into a computer (no thank you!). Recently, they've begun running more realistic articles taking a more nuanced look at technological change and its consequences.
Posted by LongTomH | Thu Apr 11, 2013, 03:23 PM (0 replies)
Dr. K. Eric Drexler created the term nanotechnology to describe the concept of "atomically precise machines building atomically precise products." That was before researchers who wanted to label their own work as nanotechnology appropriated the term to describe "nanoscale particles, fibers, electronics, and the like." (Quotes are from Drexler's blog.)
Now, Eric Drexler has a new book, to be released in May, and he has updated his blog, Metamodern, with a history of nanotech publications: Missing pieces: The lost history of how nanotechnology took hold in the world.
My new book, Radical Abundance, is now (at last!) nearing release. It reframes prospects for atomically precise manufacturing (APM), exploring timeless physical principles, surprising progress, and potential applications to global challenges that include economic development and climate change. Radical Abundance also looks back on the history of ideas that has shaped today’s perceptions of APM. Much of this history predates the rise of the web, however, and several key publications have been unavailable and hence effectively invisible.
To provide access to that 'lost history,' Drexler has posted PDF links for publications dating back to 1982 that introduced the concept of nanotechnology to audiences both popular and academic.
I've been following Eric Drexler's work on nanotechnology since hearing him discuss his work at a space development conference in 1986. Eric's interest in space predates his interest in nanotechnology; he worked with Dr. Gerard K. O'Neill in the 1970s, when Gerry O'Neill was first developing his space manufacturing concepts. He still holds patents on such space concepts as a high-performance light sail.
Posted by LongTomH | Mon Feb 25, 2013, 03:48 PM (3 replies)
When economic times are good, machines are celebrated as wonders of progress and prosperity that will improve our lives. But when times are tough, they become objects of fear. The unemployment crisis of the past four years was triggered by a Wall Street-driven financial crash, and exacerbated by policy makers who failed to do enough to stimulate the economy and to ensure that there’s enough demand for goods and services. But lately, a new argument for job insecurity has made a splash in the media: It’s the machines! Pundits predict the “end of labor,” and talk about armies of sleek robots taking over the workplace as a foregone conclusion. Dystopian fantasies worthy of a late-night sci-fi flick flood the airwaves.
Scary articles in the business section warn that any rise in wages will drive companies to save money by shedding workers and buying robots. Visions of increased efficiency and machines that can run 24/7 with no need for bathroom breaks have workers frantically trying to prove their value. Bosses warn that worker protests will only speed up automation. Don’t like the harsh conditions at Foxconn? Fine, a robot will do your job. The message: Keeping wages down and workers toiling until they drop is the only way to stave off a robot revolution.
The notion that technology is driving current unemployment doesn’t make much sense when you look at it closely. In 2007, there were reasonable, if not great, labor markets in the U.S. The giant leap in unemployment numbers dates from a very specific event, not from a long-run process that has been displacing workers over time. In 2007, the unemployment rate was 4.6 percent. By 2009, it was 9.6 percent, and remains very high. What happened wasn’t a sudden rush of robots onto the scene, but a financial catastrophe that nearly tanked the global economy.
Back in the 1990s, all kinds of technological changes were happening, as new users of the Internet will recall. Manufacturing productivity and some parts of service productivity went way up. People weren’t paranoid about machines because the economy was humming along. Technology was making humans more productive, the pundits said.
Read more here: http://www.alternet.org/economy/obsolete-humans-why-elites-want-you-fear-robot
Posted by LongTomH | Sun Feb 10, 2013, 02:10 PM (10 replies)
There have been a number of recent posts on the net warning of an imminent test of an anti-satellite weapon by China. One of these is by the Union of Concerned Scientists: Is January Chinese ASAT Test Month?
In 2007 and 2010 China conducted anti-satellite (ASAT) weapons tests, both on January 11. Rumors circulating for the past few months suggest that some within the U.S. defense and intelligence community believe China is preparing to conduct another ASAT test.
The first media report on these rumors appeared in October. China’s Ministry of Defense challenged the information in that report, but in November contacts in China told us an announcement about an upcoming ASAT test was circulated within the Chinese government. We were unable to find a public statement confirming plans for a test in the Chinese media or on publicly accessible Chinese government websites. Then, just before Christmas, a high-ranking U.S. defense official told us that the Obama administration was very concerned about an imminent Chinese ASAT test.
The Obama administration has three choices: it can make a quiet diplomatic effort to persuade China to cancel or at least postpone the test, it can publicly call on China not to test, or it can remain silent until China conducts the test and then complain about it afterwards. The Bush administration took the latter approach and the space environment is much worse off for it. Despite having seen the ASAT system tested at least twice before the Jan. 11 2007 destruction of the Fengyun 1C, the Bush administration did not communicate its concerns to China, and we will never know if this might have influenced China’s decision.
The Obama administration should try to dissuade China from conducting the test. China may decide to test anyway, but it might see value in canceling or postponing the test to discuss these issues with the U.S. The Chinese Foreign Ministry routinely expresses support for diplomatic efforts to create an international space security framework. This approach is also in line with U.S. Defense Department policy. Its Oct. 2012 Directive on Space Policy, which lays out the range of approaches the DOD will take to mitigate the threat posed by the development of systems that can interfere with satellites, says it will “support the development of international norms of responsible behavior” in space. Acting to prevent irresponsible behavior before it happens is a clearly preferable approach to supporting international norms than waiting to act until after they have been violated.
High-level intervention in both countries is needed to stop the test and start discussions. Remarkably, there are no regular channels of communication on space issues between China and the United States. Congressional opposition to scientific and commercial cooperation with China in space shut down potential talks on human space flight that could have led to a bilateral dialog on space security.
The US does have an interest in preventing China from testing another ASAT weapon. Debris from the 2007 test threatened the International Space Station. The image below shows the orbit of the ISS as a blue line and the orbits of the debris from the 2007 test in red:
There's another level to this: China's space program is advancing rapidly (See the Dragon Space page at SpaceDaily.com for updates.). If the two countries continue to take a confrontational approach, that could mean a new arms race in space, which would of course, benefit the Military-Industrial-Complex. On the other hand, cooperation could make extended human missions beyond Earth orbit easier.
Posted by LongTomH | Tue Jan 8, 2013, 04:24 PM (0 replies)
Start with Douglas Hofstadter's deconstruction of the Star Trek transporter: his analysis is that the transporter essentially destroys the person who steps into one end and builds a duplicate at the other end. That person thinks he is the original; he has all the original's memories.
But, to carry the thought experiment a step further: Imagine a transporter that doesn't destroy the original; but still transmits the information to build a duplicate. Now you have two people, each of whom believes he is the original.
Hofstadter's premise is that, in the first example, the person being 'transported' dies. Assume you're the person in the transporter beam: the lights go out, your experience ends. Another person starts living who thinks he's you; but, that's literally cold comfort, since you, the experienced you, no longer exists.
I used this analogy to argue against the idea of uploading at a function for the Foresight Institute, which was originally formed to discuss Eric Drexler's ideas on nanotechnology. I argued with a group of 'singularitarians' who were very much into the idea. I used Hofstadter's analogy of the transporter to explain that, as far as I can see: if I upload my memories into a computer, that just creates a virtual model of me. If the process destroys the original, then the lights go out for me, fade to black, I'm dead. If the process doesn't destroy the original, then the original 'me' is still 'me,' no matter what the 'other guy' thinks.
To quote Robert A. Heinlein: "I know who I am; but, who are all you zombies?"
As for the idea of a singularity, count me among the skeptics. There are a number of other people skeptical of the idea, including my favorite science fiction writer: Kim Stanley Robinson. In an interview with Wired magazine, Robinson took on the idea of the singularity, among other possibilities for the future:
Robinson: I think it’s a misunderstanding of the brain and of computers, in effect. We are underestimating how complex the brain is and how little we understand it, and we’re overestimating how much computers might have a will or intention. I think the intention will always stay with us, and the machines will be search engines and adding machines — enormously powerful and fast binary, digital things — but they’re not going to do the singularity as I understand it, this notion that machines will take off on their own and leave us behind.
I think it’s some of this what I call MIT-style public relations “futurology,” which is just lame science fiction, where people are asserting that it’s really going to come true. And as a science fiction writer, I find that a little bit offensive, because nobody knows what’s really going to come true, and people who declare it is are instantly putting themselves in the fraud category. They’re claiming more than they can.
Now, to come back to the singularity, I think what’s useful in it is the idea of it as a metaphor; it’s a science fiction metaphor, and even if it will never come true in a literal sense, it might be a good way of talking about the way things feel already. So that I’ve been saying, “Yeah, the singularity, if it ever is going to happen, it actually happened back in 2008, with the financial crash.” Because what happened there, nobody quite understands, and it was a really super-complex system that involves computers, algorithms, laws, habits and traditions, and all of them combined on a global financial system that no one person understood or controlled. So that’s almost like the singularity. Our financial system has actually blown up in our face, and none of us understand it, and yet it does control the world.
If you read much Kim Stanley Robinson, you'll soon see that social justice and ecological concerns are major themes in his work. This continues in his latest novel, 2312, set in a future where human beings are spread across the solar system. The economic system for the autonomous space colonies is called the Mondragon Accord:
Wired: You call this system “the Mondragon Accord.” Is that based on something real?
Robinson: Yes, in the Basque part of Spain there’s a town called Mondragon that runs as a system of nested co-ops — including the bank, which is simply a credit union owned by everybody. So it’s a town of only 50 to 100,000 and they’re all Basques — more or less — and they don’t intend to leave the city, so there are reasons why capitalist economists want to say that it can’t possibly work for all the rest of us, but I’m not so sure. And what I wanted to do is scale it up, and show a Mondragon-style system working amongst all the space colonies in one giant collective of cooperatives.
Read the rest of the interview here: http://www.wired.com/underwire/2012/06/geeks-guide-kim-stanley-robinson/all/
Posted by LongTomH | Tue Nov 27, 2012, 11:48 PM (1 replies)
Posted by LongTomH | Fri Nov 16, 2012, 07:24 PM (29 replies)
From the Smithsonian Institution's Air & Space magazine online: Is SpaceX changing the rocket equation?
The saga of entrepreneur Elon Musk's attempt to bring down the cost of putting stuff into space actually started with his long-term ambition: Making human beings a multi-planet species. Musk wanted to put a small greenhouse with some seeds and plant-food gel on the surface of Mars. He found contractors who would build a lander for a reasonable cost; but, the cost of launching it to the red planet was prohibitive, whether he was talking to US rocket companies or the Russians.
So, in 2002: "...enlisting a handful of veteran space engineers, Musk formed Space Exploration Technologies, or SpaceX, with two staggeringly ambitious goals: to make spaceflight routine and affordable, and to make humans a multi-planet species." The key to all this is holding down the cost of launches:
But what really sets SpaceX apart, and has made it a magnet for controversy, are its prices: As advertised on the company’s Web site, a Falcon 9 launch costs an average of $57 million, which works out to less than $2,500 per pound to orbit. That’s significantly less than what other U.S. launch companies typically charge, and even the manufacturer of China’s low-cost Long March rocket (which the U.S. has banned importing) says it cannot beat SpaceX’s pricing. By 2014, the company’s next rocket, the Falcon Heavy, aims to lower the cost to $1,000 per pound. And Musk insists that’s just the beginning. “Our performance will increase and our prices will decline over time,” he writes on SpaceX’s Web site, “as is the case with every other technology.”
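The quoted price-per-pound figure is easy to sanity-check. A minimal sketch, using my own assumed payload number rather than anything in the article (the early Falcon 9 was generally credited with roughly 23,000 lb to low Earth orbit):

```python
# Back-of-the-envelope check of the quoted Falcon 9 price-per-pound figure.
# Assumptions (mine, not the article's): a $57 million launch price and an
# early Falcon 9 payload to low Earth orbit of about 23,000 lb (~10,450 kg).
launch_price_usd = 57_000_000
payload_lb = 23_000  # assumed LEO payload for the early Falcon 9

price_per_lb = launch_price_usd / payload_lb
print(f"${price_per_lb:,.0f} per pound")  # roughly $2,478/lb, just under the quoted $2,500
```

With those assumptions the arithmetic lands just under the "$2,500 per pound" in the article, which suggests the magazine simply divided list price by maximum payload.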
Bringing down the cost required a departure from the usual big aerospace way of doing things:
...prices are expected to rise significantly in the next few years, according to defense department officials. Why? Musk says a lot of the answer is in the government’s traditional “cost-plus” contracting system, which ensures that manufacturers make a profit even if they exceed their advertised prices. “If you were sitting at an executive meeting at Boeing and Lockheed and you came up with some brilliant idea to reduce the cost of Atlas or Delta, you’d be fired,” he says. “Because you’ve got to go report to your shareholders why you made less money. So their incentive is to maximize the cost of a vehicle, right up to the threshold of cancellation.”
SpaceX's design philosophy emphasized both innovation and simplicity in design, like the decision to use the same low-cost Merlin engines in all stages of their vehicles. Another secret is an organizational style at odds with traditional aerospace:
But as for SpaceX’s organizational style, it’s Silicon Valley, not NASA, that had the most influence. In Hawthorne, where everyone including Musk works in cubicles instead of offices to encourage communication, the buzzwords of the business culture—lean manufacturing, vertical integration, flat management—are real and fundamental. Says former SpaceX business development director Max Vozoff, “This really is the greatest innovation of SpaceX: It’s bringing the standard practices of every other industry to space.” Having almost all of SpaceX’s engineers under one roof means the process of designing, testing, and improving is greatly streamlined. One NASA manager who visited SpaceX quips that when there is a new problem to solve, “it looks like a flash mob” in the hallway.
I got a look inside the traditional NASA/big aerospace way of doing things on a field trip to the Marshall Space Flight Center at Huntsville, AL during an International Space Development Conference back in the 90s. One of our guides talked about problems with getting International Space Station contractors together for a meeting. You see, to build support for the ISS, congressional supporters had to provide contracts, and therefore jobs, in the home districts of as many supporters as possible. Which meant that NASA had to rent a large auditorium or even a stadium, for a meeting of contractors.
Add in the fact that decisions on design were often made to provide contracts to companies with powerful supporters in Congress, rather than for engineering reasons. Why do you think solid rockets were chosen for the space shuttle boosters despite their safety hazards? Some NASA engineers resigned when they learned that solids were to be used on a crew-carrying vehicle. Read Richard Feynman's comments in the appendix of the Rogers Commission report on the Challenger disaster, especially the paragraphs on solid rockets. Supporters of the solid-fuel rocket company Morton-Thiokol (now ATK Launch Systems Group) were able to influence NASA to use solids in return for their support for the space shuttle, which was in danger of cancellation several times during the 1970s.
Morton-Thiokol/ATK's supporters successfully resisted attempts to replace the SRBs with liquid-fuel boosters after the Challenger tragedy. That same group of powerful congressmen is the major reason that every launcher concept proposed by NASA since has used, or even been based on, variants of the shuttle's SRBs. That includes the cancelled Ares rocket, based on a 'single-stick' version of the shuttle SRB. A number of aerospace commentators have said that, if an Ares were carrying a crewed Orion space capsule, the crew would have little chance of survival, even with an Apollo-style launch escape system.
All the above, and more, are why a number of space program supporters, including progressives like myself, were happy when President Obama decided to rely on private launch companies like SpaceX and Orbital Sciences Corporation for International Space Station resupply. I would also like to see SpaceX and Orbital Sciences be allowed to bid on contracts for launchers and capsules for future deep-space missions to the lunar L2 point, asteroids and eventually Mars.
NASA does many things very well, as demonstrated by the Curiosity rover mission to Mars, the Kepler planet-finding space telescope and other missions; but, in developing vehicles it's been handicapped by having to work with big aerospace as well as being micro-managed by Congress.
For more information, go to SpaceX's webpage and its Facebook page.
BTW this post is partly in response to a (hopefully) friendly debate with DU colleague Bananas on why I don't support the proposed Space Launch System, and why I'm a supporter of SpaceX's Falcon 9 Heavy. SpaceX has proposed follow-on heavy-lift launchers in the Saturn V class.
Also BTW, we may not necessarily need Saturn-class heavy lift to do manned deep space exploration; but, that's another post.
Posted by LongTomH | Thu Nov 15, 2012, 05:17 PM (6 replies)
This idea of "a survivable nuclear war" has been a recurring theme since the 50s; it arose again during the Reagan / Bush I years, and I think, in a more subtle form in the Project for a New American Century document.
Along with that has been the continuing drive by American hawks to gain a First Strike capability, always fed by the story that the Soviets were about to gain it first. One of the leaders of this was Gen. Danny Graham, a member of the Team B project in the 70s and 80s. General Daniel O. Graham was also the driving force behind Strategic Defense ('Star Wars'), which still survives as Ballistic Missile Defense.
Graham and company attempted to sell this to the American public as an alternative to McNamara's Mutual Assured Destruction doctrine. Admittedly, it was a seductive argument to generations that had lived under the threat of nuclear annihilation for decades. To the Soviets, and to a lot of people in this country, it looked more like an attempt to gain First Strike capability over the Soviet Union.
A backstory to this was Danny Graham's attempt to co-opt the popular pro-space movement built up around the work of Dr. Gerard K. O'Neill of Princeton University. Dr. O'Neill's book, The High Frontier: Human Colonies in Space, inspired the formation of a popular movement to settle the solar system. I was a member of the L-5 Society, the largest organization inspired by Gerry O'Neill's work.
Graham began co-opting Dr. O'Neill's work by naming his SDI organization High Frontier Inc.; and naming his book High Frontier.
I was one of the people writing to Graham to complain about the use of a title that had already become synonymous with Gerard K. O'Neill's space settlement concepts. Graham's reply was that "a book title cannot be copyrighted" (true!). Somehow, in the process, I ended up buying a copy of Graham's book and video (I did say his argument was seductive!).
A pro-Graham, pro-SDI faction led by science fiction writer, essayist and conservative activist Jerry Pournelle took over leadership of the L-5 Society, attempting to turn it into a vehicle to promote Strategic Defense. L-5 lost a major portion of its membership in the debate that followed. The greatly diminished organization merged with Wernher Von Braun's National Space Institute to form the National Space Society.
The L-5 Society had a local chapter network that was international in scope. The National Space Society soon became the PR arm of the Aerospace Industries Association. Local chapters were allowed; but their voices were muted.
This post got away from me; I really just intended to point out Gen. Graham's role in maintaining the Cold War at a high level, with a constant attempt to build a First Strike capability, which led to his advocacy of Strategic Defense. His destruction of the peaceful, pro-space movement is a subtext to this.
Someday, I need to work up a more coherent post on this. If I could ever discipline myself, it should be a book on the tension between space for peace and space for military conquest.
I still believe that settlement of space is necessary to the long-term survival of the human race.
"The universe is probably littered with the one-planet graves of cultures which made the sensible economic decision that there's no good reason to go into space--each discovered, studied, and remembered by the ones who made the irrational decision." ---XKCD
Posted by LongTomH | Thu Sep 27, 2012, 02:08 PM (0 replies)
Older DUers who were following aerospace news back in the 1980s may remember all the hype about the National Aerospace Plane, (NASP).
This project came out of the ultra-top-secret DARPA 'Copper Canyon' study. President Ronald Reagan announced the NASP program in his 1986 State of the Union Address; it was sold to the American public as the prototype for an "Orient Express" that could reach Tokyo in two hours, and as a single-stage-to-orbit replacement for the Space Shuttle.
A lot of people in the pro-space movement, including the National Space Society (of which I was a member), bought into the hype. Others were not so easily convinced: an aerospace craft that could reach near-orbital speeds (Mach 20-25) in the atmosphere would undergo incredible heating (1,800-3,000 deg F). Insulating tiles like those on the shuttle only work when the heat load is relatively brief; insulation only slows the progress of heat toward the craft's skin. In prolonged hypersonic flight, the problem would be heat soak: heat would have time to reach the skin of the craft. So an active cooling system would be needed, along with a new generation of refractory (heat-resistant) materials. An active cooling system would probably be one that passed fuel (liquid hydrogen in most designs) under the skin to carry away heat.
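The heating the skeptics worried about falls out of a standard compressible-flow formula: the stagnation temperature the airframe sees grows with the square of the Mach number. A rough sketch of my own (not from the NASP program), assuming ideal-gas air with gamma = 1.4 and a stratospheric ambient temperature of about 217 K; note that above roughly Mach 6 this formula badly overestimates real skin temperatures, since dissociation and radiation carry energy away, but it still shows why passive tiles alone can't handle sustained hypersonic cruise:

```python
def stagnation_temp_k(mach, ambient_k=217.0, gamma=1.4):
    """Ideal-gas stagnation temperature: T0 = T * (1 + (gamma - 1)/2 * M^2)."""
    return ambient_k * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

def k_to_f(t_k):
    """Convert kelvins to degrees Fahrenheit."""
    return (t_k - 273.15) * 9.0 / 5.0 + 32.0

for mach in (3, 6, 20):
    # Mach 6 already works out to roughly 2,740 deg F; Mach 20 is far beyond
    # what the ideal-gas formula can meaningfully predict.
    print(f"Mach {mach}: about {k_to_f(stagnation_temp_k(mach)):,.0f} deg F")
```

Even at Mach 6, the recoverable air temperature is already at the top of the 1,800-3,000 deg F range quoted above, which is the heat-soak problem in a nutshell.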
An active cooling system would add weight to the vehicle. There was also the issue that scramjet engines, ramjets that can operate at hypersonic speeds, don't even begin to work until they're moving at about Mach 6 or greater. That meant the NASP would have needed two or three propulsion systems: one for takeoff to about Mach 3, another for the realm from Mach 3 to Mach 6, and the scramjet from Mach 6 upward. I might add that air-breathing engines are heavier than rockets.
The weight of the active cooling systems and the multiple propulsion systems would largely negate the advantage that airbreather systems seemed to promise.
The National Aerospace Plane project was finally terminated in 1993. A few years later, in 1996, aerospace writer G. Harry Stine announced that the National Aerospace Plane project had been a cover for a military project to develop hypersonic flight! Nothing ever flew except unmanned test vehicles, like the Waverider.
So we lost a number of years when we could have been working on a practical successor to the Space Shuttle, probably a two-stage, completely reusable vehicle.
As for hypersonic flight becoming commercially feasible in any foreseeable future, I would point to the Anglo-French Concorde. Between its entry into airline service in 1976 and its retirement in 2003, the Concorde was a consistent money loser; it was only flown by airlines like British Airways and Air France that were subsidized by their respective governments. Even Britain and France haven't been tempted to invest in a Concorde II. (The US supersonic transport program was wisely terminated early in the 1970s.)
The reasons for the Concorde's lack of commercial success, and the reasons hypersonic flight isn't likely to be commercially viable, are twofold: 1) Cost of fuel, or the laws of economics meet the laws of aerodynamics: somewhere around Mach 1.8, the energy needed to overcome drag starts increasing rapidly. 2) Cost of maintenance: airlines prefer a robust vehicle that doesn't break the bank in maintenance labor or material costs. Supersonic and hypersonic aircraft would require exotic, expensive materials and many more man-hours of maintenance by highly skilled workers.
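The fuel-cost point can be made concrete with the standard steady-cruise relation, in which fuel burned per unit distance scales as TSFC * W / ((L/D) * v). A back-of-the-envelope comparison, using rough public estimates of my own choosing (none of these figures are from this post): a subsonic airliner cruising with a lift-to-drag ratio near 16, versus the Concorde's roughly 7, with the Concorde's engines also burning more fuel per unit of thrust:

```python
# Illustrative cruise fuel burn per unit distance, per unit aircraft weight,
# from the steady-cruise relation: fuel/km ∝ TSFC * W / ((L/D) * v).
# All parameter values are rough public estimates, assumed for illustration.
def fuel_per_km(tsfc_per_hr, lift_to_drag, speed_kmh):
    # Relative fuel burned per km per unit weight (arbitrary units).
    return tsfc_per_hr / (lift_to_drag * speed_kmh)

subsonic = fuel_per_km(tsfc_per_hr=0.6, lift_to_drag=16.0, speed_kmh=900.0)
concorde = fuel_per_km(tsfc_per_hr=1.19, lift_to_drag=7.1, speed_kmh=2140.0)
print(f"Concorde burns roughly {concorde / subsonic:.1f}x more fuel per tonne-km")
```

Even though the Concorde flew more than twice as fast, its much lower lift-to-drag ratio and thirstier engines left it burning close to twice the fuel per tonne-kilometer, which is exactly the economic wall described above.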
The reasons given above are largely why modern airliners don't fly appreciably faster than the original Boeing 707 of the late 50s. Most of the design studies conducted by NASA and big aerospace are aimed at reducing fuel consumption, not achieving supersonic or hypersonic speeds.
Posted by LongTomH | Wed Aug 15, 2012, 01:55 PM (0 replies)