No matter how sophisticated technologies become and how much mankind evolves, there is little—if anything—we can do to prevent natural disasters from occurring. What we can do, however, is implement technologies that help streamline the way we respond to such disasters.
And that’s where the World Bank Global Facility for Disaster Reduction and Recovery (GFDRR) comes into the equation. The organization educates governments and communities on how to respond most efficiently and effectively to natural disasters. One aspect of that management is Code for Resilience, an initiative run by the GFDRR that leverages the power of open source, bringing risk management decision makers and software developers together to work collaboratively on solving disaster-related issues.
For Dr. Alanna Simpson, a senior disaster risk management specialist at GFDRR, such collaboration is one of the biggest perks of the open source and open data movements: bringing together two parties that might not otherwise interact. There are proprietary tools governments can leverage to help reduce the risks associated with disasters, but those tools are often expensive, meaning many governments don't have the funds to deploy them, particularly in today's challenging economy.
That’s what makes open source so attractive.
“Open source software and the availability of open data really lower the barrier for everyone to participate,” Simpson said. After all, the technology can be extremely cost-effective, with governments around the globe realizing substantial cost savings by choosing to deploy open source solutions.
Some of the best approaches to disaster risk mitigation, Simpson said, combine top-down and bottom-up data collection. Indonesia has put this philosophy into action with a community-based approach to disaster risk management: since 2011, the project there has mapped more than 1 million buildings across the country using open source tools developed specifically for that purpose.
To date, GFDRR has empowered 40 million people in 24 countries to access information related to natural hazard risks. The group hopes that, as a result, governments will be better equipped to respond to natural disasters when they do unfortunately occur in the future.
Recent vulnerabilities like Heartbleed served as a reminder of the importance of maintaining the integrity of networks and code so that systems and intellectual property remain protected at all times.
At Protecode, we understand that fully, which is why we've engineered our Global IP Signatures database to constantly cross-reference the National Vulnerability Database (NVD), the U.S. government repository of security checklists, security-related software flaws and more. In doing so, our customers benefit from real-time analysis of third-party open source code contained in their projects.
That analysis generates a comprehensive report on all security vulnerabilities that may exist in their projects. Such insight allows customers to move forward with these projects knowing that the integrity of their code remains fully intact.
If our tools do in fact identify security vulnerabilities in existing open source code, our customers receive color-coded identification of those flaws indicating their severity. On top of that, the reporting tool highlights the components of the code that are flawed and also provides a description of what the glitches entail. Customers are also directed to the appropriate place on the NVD’s website where they can find additional information if they so choose.
As we saw with Heartbleed, serious new security flaws can be discovered at any time. Thanks to our analysis, our customers will know about these new glitches and bugs as soon as they are added to the NVD. Customers can also choose to track security vulnerabilities against certain packages.
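The cross-referencing workflow described above can be sketched in a few lines. This is an illustrative example only, not Protecode's actual implementation: the feed entries, component list and severity-to-color mapping below are hypothetical stand-ins for a real CVE feed and a real scanning engine.

```python
# Illustrative sketch: match a project's third-party components against
# vulnerability records of the kind published in the NVD, then bucket
# matches by severity for a color-coded report. All data is hard-coded
# here for demonstration; a real tool would pull a live CVE feed.

SEVERITY_COLORS = {"HIGH": "red", "MEDIUM": "yellow", "LOW": "green"}

# Hypothetical feed entries: (component, affected version, CVE id, severity).
cve_feed = [
    ("openssl", "1.0.1", "CVE-2014-0160", "HIGH"),    # Heartbleed
    ("libpng",  "1.2.5", "CVE-2004-0597", "HIGH"),
    ("zlib",    "1.2.1", "CVE-2004-0797", "MEDIUM"),
]

def scan(components):
    """Return (component, cve_id, severity, color) for every match."""
    report = []
    for name, version in components:
        for feed_name, feed_version, cve, severity in cve_feed:
            if name == feed_name and version == feed_version:
                report.append((name, cve, severity, SEVERITY_COLORS[severity]))
    return report

# A project using a vulnerable OpenSSL and a patched zlib.
project = [("openssl", "1.0.1"), ("zlib", "1.2.8")]
print(scan(project))  # [('openssl', 'CVE-2014-0160', 'HIGH', 'red')]
```

The key design point is the same as in the report described above: flagged components carry both a severity bucket (for the color coding) and the CVE identifier, so a reader can follow up on the NVD's website for details.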
Protecting the integrity of your code and intellectual property is something we don’t take lightly. And we don’t expect you to figure out how to do it on your own. Open source security vulnerability reporting tools can do the hard work for you so that you can focus on other mission-critical areas of your business.
Open source of course! Get the scoop in this week’s collection of open source news…
Linux to go …
A new distribution of Linux, specifically Automotive Grade Linux (AGL), could soon be fueling a new generation of open-source-powered cars. What this means is that a) future Herbies will be cyber-talking to each other, and b) thanks to open source, future souped-up cars will be created (and driven) by hackers. Read more at TechRepublic, then find out about the Automotive Grade Linux Working Group.
Crowdfunding: who’s laughing now?
Ten years ago when David Rappo first came up with the idea for a crowdfunding site solely dedicated to financing open source software projects, people laughed. Since then both open source software and crowdfunding have become mainstream, so Rappo has re-launched Bountysource, an (open source) platform for getting open source projects off the ground. Read more here.
Short term pain could mean long term gain for the NHS
Up until now, the UK's National Health Service (NHS) has been wary of switching from a proprietary to an open source operating (pun intended) system. And with good reason, since patient health records are sensitive information. But the end of support for its current Windows XP operating system has renewed calls for a move to Linux. And while the initial switch won't be without its headaches, the long-term cost savings could be just what the doctor ordered. Read more at The Conversation.
DNA under Apache 2.0?
Could DNA be open sourced, so you (assuming you are a garden-variety DNA scientist) could download it, modify it and create a whole new creature? John Schloendorn, CEO of medical start-up Gene And Cell Technologies, is proposing exactly that. He wants to take expensive (and restrictively licensed) proteins and make them open source. Scientists could then use these proteins to synthesize DNA. It's an interesting proposition, covered in Radar (garden-variety DNA scientists go here).
if (dual_licensed) then open_source = $$
Patrick McFadin, chief evangelist for Apache Cassandra, recently explained the ups and downs organizations face when deciding to open source their products. He advocates staying away from a services model and sticking with a dual model: an open source license plus a commercial license for proprietary add-ons. He also points out that licensing is a major consideration, since restrictive licenses like the GPL can both hamper and drive commercial growth depending on how they are applied. Read the full story at opensource.com.
Some friendly advice on managing vulnerabilities
Excuse us for tooting our own horn, but we think you may find the advice recently published in Law360 useful. Peruse our tips for managing open source security vulnerabilities here.
After being uncovered earlier this year, Heartbleed—the serious security vulnerability in OpenSSL that affected vast expanses of the Internet—was blamed on the open source community by some pundits. But simultaneously, many credited that same community for discovering the flaw in OpenSSL, which may otherwise have been missed, through its code review.
Either way, the fallout from Heartbleed has led programmers to create their own forks of OpenSSL, presumably in hopes that such a flaw won't happen again. Last month, Google became the latest company to announce its own take on OpenSSL, BoringSSL, a name the company says is "aspirational and not yet a promise." In other words, Google hopes BoringSSL doesn't cause the stir that OpenSSL did.
Earlier this year, other developers forked OpenSSL into LibreSSL because they felt that the pervasive standard for encrypting data sent to and from websites was "not developed by a responsible team." At the same time, the Linux Foundation doubled down on OpenSSL via its Core Infrastructure Initiative.
Google did say that it was not intending for BoringSSL to replace OpenSSL. Instead, the company will continue sharing code with OpenSSL to help patch bugs and other vulnerabilities.
But what does this all mean for the open source community? OpenSSL was previously the go-to solution for encrypting communication between websites and individuals. Now, the consensus around the open source toolkit seems to have disappeared. Instead of OpenSSL evolving as the primary technology, at least three projects will progress separately.
Will one emerge as the de facto Web traffic encryption toolkit? Or will something new come down the pike? One way or another, open source programmers will keep writing code and working to create even stronger solutions.
Besides their common daily handling of significant amounts of money, the New York Stock Exchange, New York Mercantile Exchange and NASDAQ have something else in common: All three exchanges now rely on Linux.
In the world of finance, milliseconds matter. There is significant money at stake when one firm is able to make a trade a split second before another firm. High-frequency trading refers to using sophisticated technologies to facilitate the fastest trades possible. Because Linux is known for its low transaction and networking latency, financiers are increasingly relying on the open source operating system to help accelerate the speed with which they trade.
Jim Zemlin, the executive director of the Linux Foundation, recently spoke at the Linux Enterprise End-User Summit, addressing several hundred Wall Street executives as well as Linux developers about what he predicts for the future of technology.
Open source will lead the way.
“Hardware functions are increasingly being abstracted into software,” Zemlin explained. “More and more specialist hardware has been replaced by open source software running on generic x86 boxes.”
On the software side, open source is leading the way, Zemlin continued, because code sharing lets companies develop products faster. The result is higher-quality products that cost less to produce.
According to Gartner, when it comes to today’s software, 80 percent of the code used is open source, while companies tweak the final 20 percent to give their programs their own personalities. Because of this, “People now have full-time jobs managing their external open source resources,” Zemlin said.
He expects the trend of open source adoption to be even more pervasive in the future.
Hanging chads and dimpled chads—what?
For anyone who needs a brief history refresher, the United States presidential election of 2000 was an interesting one, to say the least: the race to determine whether George W. Bush or Al Gore won Florida's 25 electoral votes, and thus the presidency, came down to a few hundred votes.
That margin was enough to trigger a mandatory recount, and over the next few weeks the world watched as volunteers in The Sunshine State tried to determine voter intent on ballots with holes that weren't completely punched. How to count ballots that were partially punched (hanging chads) or indented but not punched through (dimpled chads) became a major subject of debate in determining who would be the country's 43rd president. (We all know how that turned out.)
The recount was chaotic, to put it mildly. To prevent such circumstances from occurring in the future, the Open Source Election Technology Foundation (OSET) seeks to develop the open source software necessary to run an election. Members of the foundation envision creating a solution that facilitates smooth elections while also providing the added bonus of cash savings to governments at all levels.
Traditional voting machines can have serious glitches. A voter could try to support one candidate but the machine might record that person’s vote for the other candidate. There can also be misconfigured ballots or broken machines. It’s 2014, so these kinds of problems seem almost anachronistic due to the pervasiveness of technology in our lives.
OSET believes that its open source solutions will make for more robust elections, as more companies will be encouraged to jump into the market once they have a certified foundation upon which to build.
“Two vendors control 80 percent of America’s infrastructure,” explains Greg Miller, OSET chairman, and that results in “no incentive to innovate.” Investors seem to think OSET might be on to something, as the foundation expects to raise $6 million this year.
Governments around the world have been reaping the benefits of open source software for years. And this is yet another example of open-sourcing the political process.
Earlier this month, Tesla Motors—a darling of many Wall Street investors over the past year or so—announced that it would be releasing all of its electric car patents to the public.
According to CEO Elon Musk, Tesla originally was worried that some of the country’s larger manufacturers would copy the Palo Alto, California-based company’s technology and then put it out of business thanks to the massive manufacturing infrastructure those industry juggernauts already have in place. But that has not been the case, according to Musk. And since Tesla’s goal is to combat climate change, it’s counterintuitive to safeguard its technology.
“If we clear a path to the creation of compelling electric vehicles, but then lay intellectual property landmines behind us to inhibit others, we are acting in a manner contrary to the goal,” Musk wrote on Tesla’s blog. “Tesla will not initiate patent lawsuits against any who, in good faith, wants to use our technology.”
To date, the manufacturer of premium electric cars has had more than 2,400 patents awarded to it, giving it a stranglehold on a small market. Why would a company want to potentially loosen its grip on such a market?
For starters, Musk has already said that the electric car market hasn't developed as fast as he had envisioned. By offering up Tesla's technology as open source, Musk may well be hoping to attract more investment into the green car sector. What's more, should startups come along and build on Tesla's technology, the company can claim that its technology is the standard in electric cars, possibly discouraging others from developing solutions that rival or exceed Tesla's.
In any case, the move is likely to spur innovation, as potential electric car manufacturers no longer have to spend time developing technology of their own but can instead use Tesla’s.
Drawn to the allure of reduced costs, better performance and increased control, open source solutions have been adopted by institutions of higher learning for the past 15 years, give or take. As the technology’s popularity has increased, so too has the number of open-source-based projects. And that increase in projects has resulted in a more apparent need for more effective licensing.
Open source licensing generally requires both inbound and outbound agreements. When it comes to outbound licenses, programmers can adopt BSD-style licenses that allow them to charge for software or give it away for free. Or they can opt for GPL-style licenses that allow anyone to copy or modify software, so long as that person releases his or her modification to the open source community.
On the other side of the coin there are inbound licenses, which are “far less well-known but no less critical,” according to Ian Dolphin, director of the Apereo Foundation. Under such licenses, programmers agree that their contributed code is original and that it can forever be used for free by anyone.
In a proprietary environment, it can be difficult to reach agreements on intellectual property rights when working together on a project, if those agreements are reached at all. But in an educational environment, where a nonprofit group manages a strong and transparent inbound and outbound licensing system, those disputes can subside rather quickly, paving a smooth road that allows collaboration to occur with less friction.
There doesn't need to be a nonprofit entity for every individual open source project generated at educational institutions, but such groups do appear to help protect intellectual property in a communal way. In doing so, projects progress faster, and the open source community, as well as those outside it, benefits tremendously from technological innovation.
When two people are analyzing a problem or working on a project together, it’s likely they will have different ideas and interpretations. When this occurs, especially in a revenue-producing environment, oftentimes collaboration tools can help produce the best results.
Placing a strong emphasis on collaboration, many businesses are turning toward open source solutions. That's because open source code is developed by a community of enthusiastic programmers who review each other's work, merging their changes into a shared repository several times a day in a process called continuous integration. This lets colleagues go over each change with a fine-toothed comb to detect any problems in the code that could cause serious harm, as when Heartbleed was discovered (although in theory, such a flaw should have been caught long before it was).
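As a rough illustration of that integration loop, the sketch below simulates accepting or rejecting a proposed change based on a set of shared checks. The checks and change records here are hypothetical toy examples; a real project would run its full test suite on every shared change.

```python
# Illustrative sketch of a continuous-integration gate: every proposed
# change is run through the team's shared checks, and the first failing
# check blocks integration. The checks below are made-up stand-ins for
# a real test suite.

def run_checks(change, checks):
    """Run every check against a proposed change; reject on first failure."""
    for name, check in checks.items():
        if not check(change):
            return (False, name)   # integration rejected, flag the failing check
    return (True, None)            # change is safe to integrate

# Hypothetical checks a team might agree to share.
checks = {
    "no_debug_statements": lambda c: "print(" not in c["diff"],
    "has_description":     lambda c: bool(c["message"].strip()),
}

good = {"diff": "x = 1", "message": "add counter"}
bad  = {"diff": "print(x)", "message": "debugging"}

print(run_checks(good, checks))  # (True, None)
print(run_checks(bad, checks))   # (False, 'no_debug_statements')
```

The point is not the specific checks but the rhythm: because every integration is small and checked immediately, problems surface close to the change that introduced them.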
Such a system allows businesses to bypass inefficiencies and, in turn, accelerate the speed with which innovation occurs in-house. With open source, businesses become unfragmented as distinct departmental silos are dissolved and decision makers get an unobstructed view of all aspects of the manufacturing process.
Rather than having one programmer—or even a team of programmers at one company—work on technology, open source allows for collaboration across multiple businesses and verticals. The result? Higher-quality software that is more secure and adaptable.
From an economic perspective, it’s relatively inexpensive to adopt open source solutions and the accompanying philosophy. On the other hand, it’s costly not to. By choosing to give open source a try, companies will quickly realize the benefits that come along with putting collaboration at the heart of an organization.
Healthcare providers worldwide are beginning to shed their pen-and-paper record keeping and transition to digital electronic medical record (EMR) systems. In the United Kingdom, one high-ranking National Health Service official is taking the opportunity to encourage healthcare decision makers to consider open source solutions when making the migration.
According to Richard Jefferson, the technology offers healthcare providers “the biggest bang for the buck.” Generally speaking, organizations that choose open source solutions are empowered with technology that is comparable to proprietary solutions, if not better. And at the same time, they don’t have to shell out tons of money for software licenses.
“If you don’t mind the fact that you’re paying £50 a year for some commodity software, it’s fine,” Jefferson said at the e-Health Insider CCIO open source conference. “But why put off using it in a clinical setting where you can save hundreds of thousands a year?”
In addition to the cost savings afforded by open source solutions, healthcare providers that leverage open source solutions are free from vendor lock-in, meaning that if their technology needs change in the future, they have infrastructure that’s flexible enough to adapt.
Open source code is developed within a community of dedicated and enthusiastic coders who review one another’s work and build off of it, strengthening technology along the way. With this in mind, healthcare providers that leverage managed open source solutions will also benefit from increased innovation and technology that is not static, but improves over time.
As healthcare providers begin digitizing their expansive medical records, it's important that they do so in a way that future-proofs their organizations against yet another technological overhaul in the near future. One solid way to do that is by deploying open source solutions.