Research competitiveness and productivity are complex subjects that should inform the development and oversight of R&D programs at the national, state and institutional levels. From a national policy perspective, studies of our national innovation ecosystem – of the factors that promote discovery and innovation – are important to America’s economic vitality.
Ironically, rather than advance our knowledge and discussion of these important topics, many university presidents seem more inclined to debate the shortcomings of available measures such as the rankings of U.S. News & World Report, sometimes even threatening to boycott the surveys. What is more, these same presidents defend the absence of adequate measurements of institutional performance by saying that the strength of American higher education lies in the diversity of its institutions. So why not develop a framework that characterizes institutional variety and demonstrates productivity understandably, effectively and broadly throughout the spectrum of our institutions?
Of course, it is not easy to characterize the wide range of America’s more than 3,500 colleges and universities. Even among the more limited number of research universities, institutional diversity is so broad that every approach to rank or even classify institutions has been rightly criticized. Most research rankings use only input measures, such as amount of federal funding or total expenditures for research, when funding agencies would be served better by information about outcomes -- the research performance of universities.
The Center for Measuring University Performance, founded by John Lombardi, has compiled some of the most comprehensive data on research universities. Its annual studies examine the multi-dimensional aspects of research universities and rank them in groups defined by relative performance on various measurable characteristics -- research funding, private giving, faculty awards, doctorate degrees, postdoctoral fellows and measures of undergraduate student quality.
The 2005 report of the Center and a recent column on this site by Lombardi note the upward or downward skewing of expenditure rankings by the mere presence or absence of either a medical or an engineering school, thereby acknowledging the problems of comparability among institutions. Lombardi hints at a much-needed analysis of research competitiveness/strengths and productivity, stating, “Real accountability comes when we develop specific measures to assess the performance of comparable institutions on the same measures.”
Indeed, a particularly thorny question has always been how to create meaningful comparisons between large and smaller research universities, or even between specific research programs within universities. This struggle seems to arise in part from the fundamental question that underlies the National Science Foundation rankings -- namely, should winning or expending more research dollars be the only criterion for a higher ranking? I think not. Quite simply, in the absence of output measures, the more-is-better logic is flawed. If research productivity is equal, why should a university that spends more money for research be ranked higher than one that spends less? The sizes of research budgets alone do not create equally productive outcomes. Other contributing factors need to be considered. For example, some universities have much larger licensing revenues than others with comparable research budgets, and surveys that compare licensing revenues to research income show no correlation, especially when the data are scaled.
Because there are no established frameworks to get at the various factors that are likely involved, I think a good beginning would be to characterize research competitiveness and productivity separately.
Because available R&D dollars vary widely by agency and field of research, and because universities do not have uniform research strengths, I suggest that portfolio analyses of research funding need to be performed. A given university’s research portfolio can be described, quantified and weighed against the percentage of funding available from each federal agency and, when possible, by the sub-areas of research supported by each agency. For example, the upward skewing of rankings is partially explained by the fact that 70 percent of all federal funding is directed at biomedical research. Likewise, the U.S. Department of Agriculture funds only 3 percent of federal research, but provides virtually all of such funds to land grant universities.
Analyses should focus on federal obligations for R&D, rather than total expenditures, because federal obligations are by and large competitively awarded and thus come closest to demonstrating competitiveness. Available data, however, present various challenges. For example, some federal funding that supports activities other than research will need to be excluded from analyses (e.g., large contracts that give universities management of support programs). Also, data are available only at the macro level of disciplines, such as engineering versus life sciences, which means that detailed distinctions between research areas will be difficult to achieve.
In addition, I submit that research competitiveness can only be demonstrated when one university's research portfolio is growing faster than those of other comparable universities, or faster than the rate at which federal funding itself is growing. I call this a “percent growth” comparison and think that, although formally equivalent, it is intuitively easier to understand than the “market share” approach used by Roger Geiger in his 1993 book, Research and Relevant Knowledge: American Research Universities Since World War II. Geiger’s 2004 book, Knowledge and Money: Research Universities and the Paradox of the Marketplace, clearly demonstrates how some universities have gained while others have lost their competitive positions in federally funded research over the years.
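The equivalence of the "percent growth" and "market share" framings is easy to see with numbers. A minimal sketch, using purely hypothetical funding figures (not actual NSF data): a university outpaces the growth of total federal funding exactly when its share of that funding rises.

```python
# Hypothetical figures for illustration only -- not actual NSF data.
univ_before, univ_after = 100.0, 130.0          # one university's federal R&D, $M
total_before, total_after = 20_000.0, 24_000.0  # all federal academic R&D, $M

univ_growth = (univ_after - univ_before) / univ_before      # 0.30, i.e., 30 percent
total_growth = (total_after - total_before) / total_before  # 0.20, i.e., 20 percent

share_before = univ_before / total_before   # 0.0050 of the total
share_after = univ_after / total_after      # ~0.0054 of the total

# Growing faster than the total is the same test as gaining "market share":
assert (univ_growth > total_growth) == (share_after > share_before)
```

Either comparison identifies the same winners and losers; the percent-growth form simply states the result in terms readers already use for budgets.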
Ideally, if the data were available, research strengths should be examined over time at the micro level, by sub-disciplines or by areas of emphasis. For example, because growth in agency budgets has not occurred uniformly across agencies or over time, it would be instructive to note how portfolio shares change over time and how a particular university has fared in specific research areas. Battelle’s Technology Practice has used new tools for the graphical representation of research portfolios to draw some interesting conclusions about how some universities are linked to industrial clusters.
Relative growth is not enough, because it leaves open the question of scaled productivity. Unfortunately, scaled research productivity data are scarce. Two seldom-mentioned sources are, however, available.
First, there is the 1997 book, The Rise of American Research Universities: Elites and Challengers in the Postwar Era, in which Hugh Davis Graham and Nancy Diamond offer new analyses, including comparisons scaled by faculty size. Their approach yields per-faculty productivity data of (1) research funding, (2) publications and (3) comparisons between private and public universities. Although the data are now dated and others have found difficulty with information on faculty size and faculty roles, I believe that the methodology employed by Graham and Diamond is worth revisiting, refining and building upon.
Second, there are the annual surveys from the Association of University Technology Managers that scale productivity in terms of output per million dollars of research activity. The AUTM data look at measurable outputs such as disclosures, patents, licenses and new company startups. Although some of these data are subject to analytical problems of their own, it is notable that the institutions that emerge as the most productive are not those at the top of the NSF rankings. More recently, the Milken Institute has begun using the AUTM data to probe the free market system as related to university research.
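The AUTM-style scaling is simply each output count divided by research expenditures in millions of dollars. A sketch with invented inputs (these are not actual AUTM survey figures):

```python
# Hypothetical inputs for illustration only -- not actual AUTM survey figures.
research_expenditures = 250_000_000  # annual research spending, in dollars
outputs = {"disclosures": 120, "patents": 35, "licenses": 40, "startups": 4}

millions = research_expenditures / 1_000_000  # 250.0
scaled = {name: count / millions for name, count in outputs.items()}
# e.g., 120 disclosures / $250M of research -> 0.48 disclosures per $1M
```

Because the denominator normalizes away sheer budget size, a small institution with an efficient licensing operation can outrank a much larger one, which is exactly why the AUTM leaders differ from the NSF leaders.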
Beyond competitiveness and productivity:
The research competitiveness and productivity analyses discussed above are modest suggestions to improve upon the commonly used and all too simplistic more-is-better approach of the NSF rankings. Still, if we are actually to improve our analytical framework so as to advance the R&D policy debate, we will need to develop more sophisticated tools.
For example, in the productivity domain and in regard to determining how one piece of research interacts with another, scaled comparisons could also be generated by measuring per-faculty citations and their relationship to other publications. Here, I think that a good start could be by way of the various citation indices published by the Institute for Scientific Information and through the newer Faculty Scholarly Productivity Index. None of these indices has been, to my knowledge, related to funding data, which presents an intriguing opportunity.
An issue not yet addressed by either productivity or competitiveness measures is that of tracking intellectual property flows. How can we begin to trace the flows of ideas and new technologies generated by universities? This question might benefit from the kind of cluster analysis of citations first pioneered by ISI when it “discovered” the emergence of the new field of immunology. The patent database would be another resource that could be brought to this task. Indeed, my colleague Gary Markovits, founder and CEO of Innovation Business Partners, has developed new processes and search tools that improve the hit rate of patent database searches, and he has worked with the Office of Naval Research on ways to accelerate the rate of innovation at its laboratories. Universities and other federal laboratories would do well to consider some of these approaches.
The public and Congress are now clamoring for accountability in higher education, just as they are with regard to health care, and while the college accountability discussions focus on undergraduate education, it won’t be long before they spread to research spending. No longer can we simply assert that adequate and comparable measurements are impossible, expecting the public to blindly trust that we in the academy know quality when we see it. As scholars and researchers, we can and must do better. Otherwise, the predictable result will be public distrust that fails to sustain even the current levels of federal R&D investments.
Luis M. Proenza
Luis M. Proenza is president of the University of Akron and a member of the President’s Council of Advisors on Science and Technology and the Council on Competitiveness.
A week ago today, the Motion Picture Association of America (MPAA) issued what had to be a hugely embarrassing news release acknowledging that an aggressively promoted and widely cited research report commissioned by the MPAA in 2005 significantly overstated the Internet-based peer-to-peer piracy of college students: "The 2005 study had incorrectly concluded that 44 percent of the motion picture industry’s domestic losses were attributable to piracy by college students. The 2007 study will report that number to be approximately 15 percent." The MPAA release attributes the bad data to an “isolated error,” adding that it takes the error seriously and plans to hire an independent reviewer “to validate” the numbers in a forthcoming edition of an updated report.
We should applaud the MPAA for going public with a painful press release about what some have tagged the “200 percent error.” (Note: Here and elsewhere in this article, this percentage has been fixed from an earlier version -- our own little mathematical error.) Unfortunately, the MPAA has yet to release the actual reports that generated either the 44 percent or 15 percent claims about the role of college students in digital piracy; the public data are limited to PowerPoint graphics in PDF format on the association’s web site. Perhaps as part of its efforts to validate the numbers in the new report the MPAA will also make public the complete document, not just the summary graphics. (Academics do know something about peer review.)
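The arithmetic behind the “200 percent error” label is worth spelling out, using the two percentages from the MPAA’s own release:

```python
old_claim = 44   # percent of domestic losses attributed to students (2005 study)
corrected = 15   # percent, per the 2007 correction

# How far the 2005 figure overshot the corrected one, relative to the correction:
overstatement = (old_claim - corrected) / corrected * 100
# (44 - 15) / 15 * 100 ~= 193 percent -- rounded in popular usage to "200 percent"
```

In other words, the original claim was nearly three times the corrected figure.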
We also have to admire the MPAA’s arrogance. The MPAA now asserts that college students account for 15 percent rather than 44 percent of the P2P piracy affecting the motion picture industry. However, the press release says nothing -- not a word -- about the source of the other "85 percent" of the P2P piracy that affects the industry’s revenues, the activities of "civilians" who use consumer broadband services.
Consistent with past practice, the January 22nd MPAA statement continues to blast college students (and by extension campus officials) about the (now much reduced) levels of P2P piracy linked to college students: “Although college students make up three percent of the population, they are responsible for a disproportionate amount of stolen movie products in this country.” Additionally, the news release closes with a terse pledge that the MPAA “will continue to aggressively fight piracy on all fronts including working to forge alliances with other copyright organizations [and] deploying technologies that help combat piracy…”
The new (corrected) MPAA data affirm what many of us who follow this issue have said for several years: P2P piracy is primarily a consumer market issue. The enabling technology is not a campus network but the consumer broadband service provided by cable and telecom firms such as AT&T, Comcast, Earthlink, TimeWarner, and Verizon, among others. Of course you would never know this from last week’s news release.
The MPAA’s statement is also laden with errors and misrepresentations. Let’s begin with some basic facts and simple math. The MPAA’s release says that “college students make up three percent of the [U.S.] population.” In fact, “college students,” ages 16-67, account for almost 6 percent of the U.S. population. The Department of Education reports that the projected number of full- and part-time college students in two- and four-year degree-granting institutions for the 2007-08 academic year totals some 18 million students; the U.S. Census Bureau reports that the U.S. population as of December 2007 totals some 303,579,509 individuals. Do the math and you’ll find that 5.9 percent of the nation’s population could be classified as “college students,” a population that includes full-time undergraduate and graduate students, part-time students in undergraduate and graduate programs, commuter students in community colleges, and adults enrolled in online degree programs, among others.
But the population of college students that most concerns the MPAA is the undergraduates who live in campus dorms and who have 24/7 access to high-speed campus networks: these are typically college freshmen and some sophomores in large public universities and the majority of undergraduates in small, private liberal arts colleges. The dorm residents total some two million students and account for 11 percent of the much larger population of 18 million college students, ages 16-67.
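The “do the math” invitation above reduces to two divisions, using the enrollment, census, and dorm figures as cited in the text:

```python
college_students = 18_000_000   # Department of Education projection, 2007-08
us_population = 303_579_509     # Census Bureau estimate, December 2007
dorm_residents = 2_000_000      # undergraduates living in campus housing

student_share = college_students / us_population * 100   # ~5.9 percent, not 3
dorm_share = dorm_residents / college_students * 100     # ~11 percent of students
```

So the students the MPAA is actually worried about are roughly 11 percent of a population that is itself nearly twice as large a share of the country as the release claims.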
I and others continue to provide evidence that colleges have policies and impose sanctions on students who engage in illegal P2P activity using campus networks. Unfortunately, the MPAA and the Recording Industry Association of America continue to press for costly “technology solutions” that campus IT experts have deemed both expensive and ineffective.
Now let’s turn to the MPAA’s claim that college students account for a “disproportionate amount of stolen movie products.” The real metric for assessing “proportionality” should not be college students as a proportion of the total U.S. population, which includes millions of infants and elderly people who don’t go to the movies or rent DVDs, but college students as a proportion of the movie-going population. Although the MPAA does not publish separate data for college students as a proportion of the U.S. movie-going audience, it does report that individuals aged 12-24 account for 28 percent of the “movie going” public. (Interestingly, the MPAA data seem to ignore all the “moviegoers” under age 12: this makes you wonder about Hollywood’s infamous accounting practices, since it suggests that no one under age 12 goes to the movies. But what about the millions of kids under age 12 who went to see Pirates of the Caribbean, Cars, Night at the Museum, Superman Returns, Ice Age, Happy Feet, and Over the Hedge -- seven of the top 10 grossing films in 2006?)
Extrapolating from the MPAA’s public data on paid admissions (i.e., the number of purchased movie tickets) we see that individuals aged 18-24 accounted for 19 percent of the 1.332 billion movie tickets sold in 2006. Admittedly, a significant number, but not all, of the 18-24 year olds going to movies in 2006 were college students. But without condoning illegal P2P piracy, these numbers suggest that the proportion of downloading that the MPAA now attributes to college students (15 percent) may be roughly proportionate (or possibly even “under-proportionate”) to college students as a segment of the movie going public. (Perhaps the MPAA will offer up a grant for an independent study of the movie-going behaviors of college students, plus additional funds to find the millions of “missing” children under age 12 who are not included in their numbers about movie attendance.)
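The proportionality comparison is itself just arithmetic on the MPAA’s own admissions data, as cited above:

```python
tickets_sold_2006 = 1_332_000_000   # total U.S. paid admissions, 2006
share_18_to_24 = 0.19               # MPAA-reported share for moviegoers aged 18-24
piracy_share = 0.15                 # corrected share of piracy attributed to students

tickets_18_to_24 = tickets_sold_2006 * share_18_to_24   # ~253 million tickets

# The corrected piracy attribution (15 percent) sits below this age group's
# attendance share (19 percent) -- "roughly proportionate, or under-proportionate."
assert piracy_share < share_18_to_24
```

Even granting that not all 18-24 year old moviegoers are college students, the corrected numbers hardly support the "disproportionate" charge.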
Then there is the news release’s closing statement about “deploying technologies that will help combat piracy,” which ignores the June 2007 Congressional testimony of both campus information technology officials and an IT industry executive that technology will never provide a comprehensive solution to stem P2P piracy.
At one time seemingly infallible in its continuing efforts to portray college students as digital pirates and campus officials as unengaged and unconcerned about digital piracy on campus networks, the MPAA now seems like the “association that can’t shoot straight,” a reference to Jimmy Breslin’s 1970 mob farce about a bungling Mafia gang. The January 22 press release is the second significant screwup for the MPAA on the P2P front in the past few months: in fall 2007, the MPAA released a software toolkit it said would help monitor illegal P2P activity on campus networks. Unfortunately, as reported by Brian Krebs of The Washington Post, the MPAA’s monitoring application posed a major risk to network security. In sum, it appears that the MPAA can’t count and also can’t do code.
But the MPAA’s press release also raises other interesting questions, some involving the backstory about the press release, some involving public policy questions now before Congress. With regard to the backstory, Inside Higher Ed's coverage of the MPAA press release reveals that members and staff of some key Congressional Committees knew about the errors in the MPAA data almost a week before the press release. Why did it take five days for the MPAA to acknowledge publicly the misleading data?
And then there are the issues involving public policy (and public posturing). Drawing on the MPAA’s widely publicized claims that college students accounted for 44 percent of the industry’s domestic losses due to digital piracy, members of Congress have made public statements blasting P2P activity on college networks and by college students. Rep. Howard Berman (D-Calif.), who chairs the House Subcommittee on the Internet, and Rep. Bart Gordon (D-Tenn.), chairman of the House Committee on Science and Technology, each convened congressional hearings about P2P piracy on campuses in 2007. These hearings, coupled with the continuing efforts of the RIAA and the MPAA, led to Congressional mandates intended to address illegal P2P piracy as part of the College Opportunity and Affordability Act of 2007.
Provisions of the College Opportunity and Affordability Act of 2007 intended to address P2P piracy on campus include reporting requirements and an implied mandate to acquire a “technology solution” to stem P2P piracy. Both will involve significant costs for campuses: at the Congressional hearing on P2P convened by Gordon last June, Arizona State University CIO Adrian Sannier testified that his institution had spent approximately $450,000 on P2P technology deterrent software over the past six years; Mr. Sannier also described illegal P2P activity as an “arms race” that neither side will win, an assessment affirmed by other campus CIOs testifying at the June hearing.
Congressional mandates to stem P2P come at an interesting time for the nation’s colleges and universities. In the wake of the recent, tragic events at Virginia Tech and at other institutions, colleges and universities have been scrambling to develop emergency notification plans and acquire notification technologies – some as simple as alarms and sirens and others as complex as notification and messaging systems that send email, text messages, and voice mail. Concurrently, given the downturn in the economy in recent months, many institutions now confront both mid-year budget rescissions and impending budget cuts for the coming year. In many cases, colleges and universities had little or no money in their budgets this year for either notification systems or P2P monitoring technology.
Will college leaders receive a formal apology from the MPAA for the consequences of its “200 percent error”? Will Berman and Gordon issue new statements in the coming weeks, toning down their prior criticism and also admonishing the MPAA for providing bad data that led to ill-conceived legislation -- the costly P2P reporting and enforcement mandates in the College Opportunity and Affordability Act?
And what about the source of the “other 85 percent” of the P2P piracy that affects the movie industry? Much as the RIAA and MPAA have named the campuses where they allege P2P piracy occurs, will the two associations now go public with (hopefully accurate) data about the level of P2P piracy that occurs on consumer broadband services? (Are AT&T broadband customers more likely to engage in P2P piracy than Earthlink, TimeWarner, or Verizon customers?) Much as the MPAA and RIAA leadership has criticized campus officials for not engaging on P2P issues, will the MPAA and RIAA’s leaders now take cable and telecom industry executives to task for their benign efforts to educate their customers about copyright and to address P2P activity on consumer broadband services?
Let me affirm (yet again) that the campus community does not condone digital piracy and that I am not condoning the behaviors of either college students or “civilians” who engage in digital piracy. As I stated in a November commentary published by Inside Higher Ed, "illegal P2P downloading is a messy issue. But the swiftboating efforts of the RIAA and the MPAA to portray college students as the primary source of digital piracy will not resolve this problem, in either the campus or the consumer markets. Neither will federal mandates that ultimately will mean pass-through costs for students."
Next steps? Perhaps the MPAA’s press release acknowledging its “200 percent error” will set the stage for new, less rancorous private and public discussions about P2P piracy. Colleges and universities respect copyright; colleges and universities are engaged in serious efforts to inform and educate students about the importance of copyright. And MPAA and RIAA officials, beginning with MPAA President Dan Glickman and RIAA President Cary Sherman, should acknowledge, respect and strongly support the continuing efforts of campus officials to address copyright issues, in part by ending the public posturing that portrays colleges and universities as dens of digital piracy.
President Obama promised in his inaugural address to “restore science to its rightful place” and “transform our schools and colleges and universities to meet the demands of a new age.” These were refreshing and uplifting words from a president after the long and dark night to which science and its findings had been relegated during the previous eight years.
But these words also represent no small task for the science-friendly president. A civic crisis of science illiteracy exists today in America, and Obama’s administration is now charged with undoing a generation of decline in science policy and education in the U.S.
With White House attacks on science behind us for now, science educators must take this opportunity to propose a number of specific goals to ensure and strengthen the politically unbiased use of science in education and policy making.
October 4, 1957, may not be a date that is important to most college students today, but what occurred on this day stunned many Americans at the time. When the Soviet Union launched Sputnik into orbit, all Americans immediately knew that the Soviet Union had silently crept ahead of us in the race to control space.
The American reaction to the 1957 Sputnik launch was much more than rhetoric. The following year Congress tripled the National Science Foundation (NSF) budget to $135 million, and over the next few years of the space race, NSF support reached $500 million. Congress also passed in 1958 the National Defense Education Act, providing funding and scholarships for students and educators interested in science and mathematics.
Not everyone was on board with the new scientific policies, however. During the Kennedy and Johnson administrations industries began mobilizing to defend themselves against new science-based regulations on chemicals, pollution and industrial safety, which threatened to impose large costs on them. After the 1973 Roe v. Wade decision, Christian conservatives also began to mobilize politically. Under Reagan’s leadership, the anti-intellectual, organized efforts to weaken science-based regulations and education only grew stronger.
Then, in the climate of the mid-1990s Republican Congress, the Intelligent Design movement grew and flourished, acting through local and state school boards from Kansas to Dover, Pa., to undermine the teaching of evolution. With this foundation, President George W. Bush was able to declare his belief that “both sides should be given equal time” in high school science education.
We have thus seen over the last generation a disheartening trend in science education, the very nucleus of our great scientific advancements. In today's education system we import our premier science students from countries like China, India and South Korea. Our secondary school students do not seem adequately trained or even interested in pursuing a rigorous undergraduate curriculum in science, engineering or mathematics. The brightest minds tend to pursue business, law or medicine.
In his inaugural speech, Obama reminded us of the rich and productive relationship between science and public policy that shaped both science education and policy in earlier generations. So far, he has supported his promises with the appointments of distinguished scientists to high-level positions in his administration and by his declaration to reverse the previous administration’s ban on federal funding of research on embryonic stem-cell lines.
Of course, restoring science to its rightful place in government will require more than promises and appointments; it will require sustained hard work.
The conservative coalition will continue to press the same anti-science agenda, constantly seeking lines of attack. And without an inherently unbiased infrastructure for the use of science in policy making, any progress made by the Obama administration can be overturned as quickly as an executive order when next the political tides shift.
Our current crises are no less threatening than the launch of Sputnik was in 1957. Just as investment in science education and research a half-century ago met the Soviet challenge in the Cold War, so, too, can restoration of science education and research as a policy priority help us to meet the demands for cleaner energy, better health and technologically agile national defense on which our future depends.
We thus recommend some specific goals for the new administration, to strengthen the structural support for unbiased use of science in education and policy making:
Re-establish the nonpartisan Congressional Office of Technology Assessment, to evaluate science-based policy alternatives.
Provide educational institutions with a generous budget from Congress to create attractive opportunities for educators and for aspiring students entering science and engineering curricula.
Renew the federal investment in science education to the level of the post-Sputnik years.
Set standards for K-12 science education in all 50 states that ensure the teaching of a fact-based curriculum -- one that presents evolution, free of theistic considerations, as central to modern biology.
Experiment with new solutions to chronic problems in our secondary schools, to invest in our next generation of young scientists.
Restore the importance of good science in the policy setting.
To safeguard the role of science in policy making, the next generation of citizens and science teachers must understand that absolute consensus rarely occurs in science and is not necessary as a basis for policy making. Only a science-literate public can see through such Orwellian discourse as the “junk science versus sound science” false dichotomy. Moreover, science education will help prepare the public for the inevitable controversies that will arise with future scientific advances, as new knowledge sometimes takes us to places where some of us do not wish to go.
The promise of embryonic stem-cell research to cure disease or, more controversially, create desirable physical characteristics, and the search for an energy future freer of carbon, with the uncertain economic implications that entails, attest to the continuing power of science to thrust new issues onto our policy agenda.
The new leadership can and must define science’s role in developing and implementing public policy, and students at all levels of education must be given incentives and encouragement to study science to meet, in the president’s words, “the demands of a new age.” They must learn that policy decisions must be analytical and fact-based, and that the consequent choices we make remain with us as part of a sometimes messy, always fascinating political process. Let the restoration of U.S. science policy and education begin, so that scientific research may again be considered, as it was in our country a half-century ago, the most noble and fruitful of human activities.
Joseph Karlesky, Richard Pepino and James Strick
Joseph Karlesky is Kunkel Professor of Government, Richard Pepino is director of the Public Policy Program, and James Strick is associate professor of Earth and environment, all at Franklin & Marshall College.
Our nation has a long history of creating problem-solving partnerships between government and our research and development enterprise. Indeed, greater support for innovation is an important part of President Obama's strategy for economic growth and international competitiveness.
The largest and most prolific research and development partnerships have often involved our national security, with foundations in the military. In recent decades, this kind of collaboration has grown in support of emerging fields, like alternative energy and biomedical science. But as the threats to our nation evolve, partnerships between government, academe and industry need to move beyond areas where collaboration already is strong. A deeper, broader partnership on homeland security must be one of these areas.
The United States is no longer isolated by two oceans. And a technological revolution has made societies more interconnected than anyone thought possible. At the same time, small groups of people can exploit technology to injure and kill on a much broader scale than ever before. Indeed, the creation of the Department of Homeland Security was a response to these new-generation challenges – ones made so painfully clear on September 11th, 2001.
Today, our nation is more secure than it was before DHS was founded. And in the last two years, we have made considerable progress strengthening our defenses against terrorism, and forging new partnerships at home and overseas to protect our shared systems of trade, travel, and communication. We have improved our emergency preparedness and response capabilities, and enhanced the resilience of our communities and critical infrastructure.
Despite this progress, however, we still have a way to go to fully integrate our nation’s homeland security functions and capabilities. And to do that, we need the best that science can offer. Three areas, in particular, stand out:
Greater Aviation Security and Awareness
The United States has the largest aviation industry in the world, processing some 2 million passengers through 450 airports every day. We know that terrorists have repeatedly sought to use airplanes as a means to take innocent lives, and we know they continue to alter their tactics.
We therefore need both to address current threats and to employ technology and innovation that help us leap ahead of future ones. Better explosives detection is important, but it is just one layer of security in a multi-layered system that includes many tactics, both seen and unseen.
The heart of the challenge is to use technology to make travel and trade as secure and smooth as possible for passengers and for cargo. Technologies therefore have to be effective, but also fast, complementary to one another, and as non-intrusive as possible. And, of course, they must support our commitment to protect the privacy, civil rights, and civil liberties of our citizens.
Our goal is to create "the airport checkpoint of tomorrow" that reduces the need for physical searches and maximizes the likelihood that we will prevent another attack on aviation. But to imagine, design, test, procure, and – eventually – deploy this, we need new kinds of managerial, operational, and engineering expertise.
The 'Big Data' Challenge
A second homeland security challenge is likely familiar to many academics: research brings in reams of data, but what is essential is the ability to glean insight, and discern patterns and trends, from a mass of information. How, for example, can we improve our ability to identify the anomalies that could point to illicit or terrorist activity from millions – billions – of data points?
To the airline passengers we screen, add the data on more than 50,000 cargo containers arriving each day through hundreds of air, land, and sea ports. And add to this sea of "Big Data" the terabytes of information pouring in to the intelligence community about threats from abroad – more data each day than the entire text holdings of the Library of Congress.
Pulling actionable intelligence from this data requires the constant evolution of our information gathering, learning, and analytic capabilities. It requires software engineers, information systems designers, and communications and data security experts working together. It requires getting this right so that we can ensure that analysts, agents, screeners, and officers anywhere in the world can get the information they need securely, and in real time.
Securing Our Cyber Networks and Critical Infrastructure
Protecting our shared cyber networks and critical infrastructure also requires strong scientific and engineering partnerships. In the past couple of years, we have hardened critical facilities, such as chemical plants and transportation hubs, and greatly improved our ability to detect and respond to a large-scale cyber attack. But we know there's more to be done.
For instance, we must make sure the industrial control systems that run our water treatment and power plants are safe from attack, and find ways to ensure that the distributed nature of cyberspace becomes a contributor to the system's resilience rather than a liability. Indeed, the multiple disasters that recently hit Japan – an earthquake, a tsunami, and a nuclear crisis – illustrate vividly why resilience is so fundamental, and so important.
Ninety percent of Americans live in an area where there is a moderate or high risk of natural disaster. We know we can do more to make homes and buildings more secure and resilient. We can speed the commercialization of innovations in the field of nanotechnology that can help put more resilient building materials on the market. Our scientific community can play a direct role in developing security solutions in these and other areas.
How Scientists Can Serve the Public at DHS
Since I became homeland security secretary, we have taken several significant steps. We recently issued a solicitation for research through our Science and Technology Directorate that creates incentives for academe and the private sector to propose novel ideas and approaches.
We are supporting the president’s commitment to strengthen education in the STEM fields by granting nearly 100 fellowships, scholarships, and internships to students in science, technology, engineering, and math every year. We just announced a new Loaned Executive program to bring private sector expertise into our leadership ranks on 6- to 12-month rotations, and we’re launching a new Cyber Workforce Initiative to help attract and then retain the very top cyber professionals available in the country.
I believe there are many scientists and engineers interested in working on scientific issues for the public benefit who, perhaps, have never considered the idea of government service. Maybe their impression is that technical career paths in government are not as appealing as they are in academe or the private sector.
Yet it’s not unusual for a lawyer, economist, or political scientist to spend some time working on a particular policy issue at a government agency. We therefore need to do a better job at making a similarly worthwhile and workable path for top scientists to serve the public interest, and to help make our nation more secure. In essence, we need a model where there is more scientific knowledge across government, and more knowledge of government and public policy in science and engineering communities.
We have tremendous scientific resources in this country. We lead the world in scientific and technological innovation. We must, therefore, engage our best scientific talent in support of our common security. By doing so, we can build on past success, amplify our current efforts, and greatly accelerate our future progress toward a more secure and resilient America.
Janet Napolitano is U.S. Homeland Security Secretary. This essay is based on her Compton Lecture at the Massachusetts Institute of Technology, delivered on March 14, 2011.
Over the last 18 months, Microsoft has shifted its support for academic research to involve more universities and more kinds of studies. The shifts come at a time that Bill Gates, the company's founder, has become increasingly concerned about declines in support for key research agencies and declining student interest in computer science.
Perhaps only in the superheated atmosphere of the current conflict over the U.S. Senate's confirmation of judges could a hearing about illegal bombings and arson by animal rights groups turn into a partisan affair.