Wednesday, June 24, 2020

When news reporting avoids reporting the news.

From Why the World's Most Advanced Solar Plants Are Failing by Caroline Delbert.

In high school and then at university forty years ago, I was pulled between a deep love of Egyptology (not financially remunerative and, at the height of the Cold War with Egypt an ally of the Soviet Union, unlikely even to be feasible in any meaningful sense); an intense interest in the possibility of creating an alternative, non-hydrocarbon energy model for national economic development; and a desire to position myself for a career in international business.

Egyptology went by the board first - simply not feasible.

The real sticking point was whether alternative energy was a real deal, with genuine possibilities of creating material improvement. Out of high school, I elected to do International Economics over Electrical Engineering, but even in university I continued a heavy involvement in the alternative energy arena, doing internships, applying for grants in the field, etc.

By the end of university, I had settled on the opinion that while there was much of interest happening on the engineering front, there was no near-term prospect of commercial viability. I wanted it to be true but could not amass enough credible evidence to support the belief.

I mention all this because one of the technologies developed in the late sixties and seventies was the concentrated solar power plant. Basically, an array of parabolic mirrors is used to superheat a dense medium, which then powers a generating plant producing electricity.

The downsides are obvious - they require a lot of land, they pose a large-scale precision engineering problem, they are intermittent (subject to weather), they are capital-intensive, and they have negative environmental impacts (particularly bird deaths).

By 1982, when I graduated, multiple test plants had been built at industrial scale. The technology had been shown to be viable from an engineering standpoint. By 2005 there was an installed capacity across the globe of 354 MW. By 2018, 5,500 MW. Nearly 100 commercial plants in operation.
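That growth sounds dramatic, and in percentage terms it is. A quick back-of-envelope calculation of the compound annual growth rate, using only the figures just cited (354 MW in 2005, 5,500 MW in 2018), puts it in the low twenties per year - though, as the rest of this post argues, rapid subsidized growth is not the same thing as commercial viability:

```python
# Back-of-envelope compound annual growth rate (CAGR) of installed
# CSP capacity, using the figures cited above:
# 354 MW in 2005, 5,500 MW in 2018 -- a 13-year span.
start_mw = 354
end_mw = 5_500
years = 2018 - 2005

cagr = (end_mw / start_mw) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # roughly 23-24% per year
```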

BUT . . .

After forty years of development and experimentation and more than 100 plants built, they are still not commercially viable. They all still depend on pretty hefty subsidies and guarantees.

AND . . .

After forty years, there are still significant problems with both plant construction and plant operation.

From the article:
The government’s leading laboratory for renewable energy has released a new report detailing the strengths and flaws of concentrated solar energy. The National Renewable Energy Laboratory (NREL) published the report with the stated goal of using very mixed feedback on existing concentrated solar projects to create a list of suggested best practices going forward.

The NREL report “is titled CSP Best Practices, but it can be more appropriately viewed as a mix of problematic issues that have been identified, along with potential solutions or approaches to address those issues,” it begins. What’s inside includes problems shared across concentrating solar power (CSP) projects as well as general issues of large-scale construction. There are also issues with specific kinds of CSP plants based on their designs.

Parabolic trough CSP plants use solar collectors to heat water and generate steam heat, the same as a traditional coal or even nuclear power plant. But in between is a stage called heat transfer (HTF), where a fluid medium like oil or liquid metals carries the heat from the collection area to the turbine.

The CSP report says some of the issues with these systems are the extreme and dangerous heat of the HTF and the waste hydrogen produced by these processes. Designers have also positioned elements vertically at a higher cost, when most CSPs are built in rural places with plenty of space.

The other kind of CSP plant is a tower design, where mirrors concentrate the solar power directly into a central reservoir usually made of molten salt. These plants take a very long time to come to temperature and are subject to leaks and underperformance. All of these factors mean that molten salt plants have not yet reached their performance goals or the numbers their builders have often promised locals served by these grids.

The report says these plants have often exceeded their planned operating budgets because of surprise maintenance costs as well as poor understanding of what the true operating costs will even be. NREL writes:

“There tend to be issues that are not fully considered, and it generally falls to the owner to pick up the additional costs. Some of these issues are related to obtaining and keeping quality O&M staff; lack of understanding of regional cultures; and availability and timeliness of spare parts and services.”

Even with just a few dozen CSP plants in the U.S., the report notes that many of these are placed on poor sites. At the Crescent Dunes solar facility in Tonopah, Nevada, hundreds of birds were killed in just the first 18 months. “That's just the number of dead birds biologists have seen,” E&E News reported in 2016. But site selection also includes making compromises about how far a construction crew must travel to make onsite repairs, or even how to find a qualified workforce to work on the project in the first place.

The bottom line? CSP contractors and operators are doing their best, but the technology isn’t uniform or understood enough for the approach these builders have been taking. “The very nature of fixed-price, fixed-schedule, full-wraparound performance-guarantee EPC contracts has likely been a main reason for issues experienced at existing CSP plants,” the report concludes.
The bottom line seems to be that after forty years, the plants are still not commercially viable, they still cannot be designed well, and they still have persistent operational challenges.

To be fair, even among the best engineering and project management firms, only 20% of projects are on time, on budget, and deliver the performance results forecast. About half of projects get killed off in the early stages. But even when over budget and late, some of the remaining 30% still make money, just not as much as anticipated.

So just how bad is the problem? What are the numbers? How late are the projects? How much more do they cost than anticipated? How much higher are operating expenses than anticipated? How much lower is the generating capacity than originally planned? These are normal project questions.

The answers are not in the Popular Mechanics article. OK. It is a general interest magazine. Maybe I need to go to the original report. I click on the report link only to discover that the Popular Mechanics article is a retelling of a Scientific American article. Press release journalism by proxy. Not common, but I have seen it before. Someone releases a "study" in a press release. A media company writes a popular version which restates the key findings at a fifth-grade reading level. And then a second media company rewrites the fifth-grade version into a fourth-grade version.

The Scientific American article also does not have the fundamental information I am seeking. I find the research link in the Scientific American article and click through to Concentrating Solar Power Best Practices Study by Mark Mehos et al. from the National Renewable Energy Laboratory, part of the Department of Energy. OK, we should find some real empirical answers here.

Or not.
The SolarPACES concentrating solar power (CSP) project database was used to identify the current CSP projects that are in commercial operation around the world. As of the end of 2018, 94 commercial CSP trough and tower projects had achieved commercial online operation, with all but 4 still in active operation (76 operating parabolic trough plants and 14 operating tower projects). For this study, we received input from participants representing more than 80% of these projects.

It is important to note that the survey process was more qualitative than quantitative, largely due to concerns of confidentiality of information. Participants were invited to respond to a series of general questions and allowed to focus on the topics of most interest. We acknowledge that the results are biased by the topics of interest of the participants and the project team. However, we did receive quantitative results from several participants indicating where the shortfalls in performance occurred in plants. The results were very consistent with the findings in this report.
So: a study of an industrial sector which has been in place for forty years, but one focusing on qualitative issues rather than quantitative measures.

The first rule of science and commerce is "If you can't measure a problem, you can't fix it." Which really means you can't deliberately fix it. You might get lucky and fix the problem without knowing how or why.

I have done my share of structured cross-company and cross-industry benchmarking studies in my time. I am not mocking or even denigrating their efforts. Much useful information can arise from less structured, less empirical discussions in terms of qualitative learnings.

But it is frustrating on two fronts. First, two articles which purport to report on a known problem but have nothing to report because there are no empirical measures. Second, we are forty or fifty years in and we are still treating this as an emerging sector. The reality is that it is an unviable sector and we keep nursing it along owing to lack of accountability or oversight. This looks simply like governmental inertia compounded by irresponsibility. Anything which attracts tens and hundreds of millions of dollars in subsidies really needs some sort of forecast of viability and value.

And the media in this case simply took a press release report, regurgitated a summary and never even looked at the content or implications. It was mindless. No wonder they so fear AI.

Speaking Truth to Ideological Power - Satirical Style

Some of the most important factual reporting is now coming from satirical accounts.



Best of the Bee

Social Justice and Critical Theory Easy Explainer.



Tuesday, June 23, 2020

That individuals with higher cognitive ability are more appreciative of the free flow of divergent ideas by groups at various places on the ideological spectrum

Of particular note. From Freedom of Speech: A Right for Everybody, or Only for Like-Minded People? by Jonas De Keersmaecker.
Freedom of speech is often considered key to a well-functioning democracy. In many countries, freedom of speech is considered a more important democratic value than regular elections. But do people genuinely believe in the virtues of open debates by supporting freedom of speech for every social group? Or do they support free speech only for their own groups? In a recently published paper in Social Psychological and Personality Science, we aimed to answer these questions, and we sought to explore whether higher cognitive ability was associated with more principled positions on free speech. We expected that people with higher cognitive abilities would be more inclined to embrace the open exchange of ideas, wherein viewpoints can be scrutinized and challenged in order to foster informed decision making and knowledge. Therefore, it was hypothesized that cognitive ability is related to more generalized freedom of speech support for all social groups across the ideological spectrum.
He then describes and summarizes the three analytical tests done on the data.
The series of studies suggest that cognitive ability is related to support for freedom of speech for groups across the ideological spectrum. These results do not mean that people with higher cognitive abilities are free speech absolutists. Indeed, although cognitive ability was reliably related to relatively stronger free speech support for each social group, groups that preach hate and violence (e.g., racists, anti-American Muslim clergymen) received rather limited freedom of speech support in absolute terms. The results do suggest, however, that individuals with higher cognitive ability are more appreciative of the free flow of divergent ideas by groups at various places on the ideological spectrum. Indeed, even when these groups voice ideas that they don’t like.
The natural question from these findings follows.

What does it say when the centers most associated with deplatforming, cancel culture, speech controls, and resistance to alternative ideas are our universities? With our most elite universities manifesting the greatest aversion?

What does it say about the cognitive ability of the administrative leaders of those institutions? Might this be a modern rewriting of the WWI adage? That the cognitive lions (faculty) are led by cognitive donkeys (the administration)? It seems logical. You don't want to be a science denier, do you?

And similarly, what can we know of the cognitive abilities of the leadership of the social justice, intersectionality, and critical theory movements which are also known for their deplatforming, cancel culture, speech controls, and resistance to alternative ideas?

If we want to clean up our cognitive playing field, these studies suggest where we might need to begin.

Or maybe I am reading too much into the findings.

Data Talks



Off Beat Humor


I see wonderful things



The Skater, 2017 by Leonard Koscianski


Click to enlarge.

Then and now

I continue to wrestle with the relative societal seizure/hysteria over Covid-19 versus similar new viruses in 1968 and the late fifties, and earlier flus, most notably the Spanish Flu of 1918-20.

I have posted earlier that I went back to a number of general histories of topics in the immediate postwar era and the 1920s, where one might expect at least some passing mention of the enormous toll taken by the Spanish Flu (estimated at some 500,000 Americans), and found virtually no mention.

It occurred to me this past week that my grandfather and grandmother had an accelerated wedding in 1918 as he prepared to head off to France to fight. She was 20 years old, and she and my grandfather would have been in the generational cohort most directly affected, the Spanish Flu being noted for its mortality among adults 18-30.

I called my mother to see if she had ever heard her mother (my grandmother) relate any stories about the Spanish Flu, deaths of friends or families, the presumed panic. Her answer? "No, she never mentioned it."

So we still have this study in contrasts. A well-intentioned but panicky closing down of the economy in order to prevent the overload of the healthcare system. A virus with a death rate that appears high against annual averages but within the norm of what we see every decade or so. A virus which takes most of its victims in the last weeks of their lives, when they are dying of other conditions, rather than those in the prime of their lives.

Why are we so panicked when comparable viruses in the fifties and sixties were numerically similar in deaths but were not remarked upon at the time, and no special actions were taken?

Is this truly some sort of sick side effect of an hysterically partisan conflict? Simply a product of the desperation of a dying media culture ravenous for clicks?

While plausible, those still seem to me to be exaggerated explanations. But I have no adequate alternatives.

The only thing I can come up with is that perhaps in 1918, not only were we inured to the sudden death reports of World War I, but possibly the magnitude of the death rates was not so visible because we did not have centralized national reporting in the fashion we do today. NBC, ABC, CBS, WaPo, NYT and others all run with the same national stories out of Washington, D.C. and New York.

Perhaps in 1918, relying on local news, we missed the larger context. We got the death lists locally but simply could not form a national picture, and therefore a broader sense of magnitude. This would make sense at a time when we were also inured to periodic local excess deaths from typhoid, cholera, yellow fever, etc. We were accustomed to periodic, somewhat frequent spikes in deaths, but local ones. And we did not have the national perspective to put the pieces together when everywhere suffered spikes simultaneously.

It is a somewhat logical argument but still seems inadequate to me.

Perhaps we were simply psychologically more robust then, accustomed to shorter and more precarious lives, and none of what seems exceptional in hindsight appeared out of the ordinary at the time.

I don't know but it feels odd. Which inclines me to think that the issue is not how they responded then but how we are responding now.

We are better now at credentializing than at educating

From The Evolution of the US Family Income‐Schooling Relationship and Educational Selectivity by Christian Belzil and Jörgen Hansen. From the Abstract:
We estimate a dynamic model of schooling on two cohorts of the NLSY and find that, contrary to conventional wisdom, the effects of real (as opposed to relative) family income on education have practically vanished between the early 1980's and the early 2000's. After conditioning on a cognitive ability measure (AFQT), family background variables and unobserved heterogeneity (allowed to be correlated with observed characteristics), income effects vary substantially with age and have lost between 30% and 80% of their importance on age‐specific grade progression probabilities. After conditioning on observed and unobserved characteristics, a $300,000 differential in family income generated more than 2 years of education in the early 1980's, but only one year in the early 2000's. Put differently, a $70,000 differential raised college participation by 10 percentage points in the early 1980's. In the early 2000's, a $330,000 income differential had the same impact. The effects of AFQT scores have lost about 50% of their magnitude but did not vanish. Over the same period, the relative importance of unobserved heterogeneity has expanded significantly, thereby pointing toward the emergence of a new form of educational selectivity reserving an increasing role to non‐cognitive abilities and/or preferences and a lesser role to cognitive ability and family income.
Hard to interpret this study just from the abstract.

I think it is saying, in part, that compared to forty years ago, education attainment outcomes are materially more weakly associated with family income. How much a household earns is less predictive today of education attainment.

If that is so, it could be a quite positive outcome, indicating that we are getting better at finding cognitive talent wherever it lurks and getting it into the education system in a fashion which allows its fruition.

But the last couple of sentences seem to indicate that there is greater education attainment simply because we are demanding cognitively less from students. Which defeats the purpose and is merely an act of credentialization rather than educating.