…are all covered in this week’s compendium of open source news!
OSS licensing: dead or in its prime?
Recently, technology writer Matt Asay wrote an article in InfoWorld heralding the death of open source licensing. OSI president Simon Phipps fired back by declaring open source licensing more important than ever. Phipps states that using a (preferably OSI-approved) licensed project is especially important in cases of distributed and commercial development. Read the rest of Phipps’ argument at InfoWorld, or read our thoughts on the subject here.
Open source Darwinism pays off
Evolution works by selecting the strongest species to survive while others perish. The same can be said for open source — for every successful open source project, there are thousands that have failed. So what’s the next evolutionary step after your open source project has gained dominance? Convert to a dual-license model (if your goal is to make money from the project). Read about the ups and downs of some dual-licensed projects at the New York Times.
Open source toddler
If you find the idea of raising a human child a little too challenging, you could try your hand at a robotic one. Researchers from the France-based Inria Flowers Lab have released a 3D-printable humanoid robot named Poppy. The group released everything you need to build the primitive (toddler-like) robot, including CAD files and the control software, under a Creative Commons license. So, if you’ve got a few days (and around $12k) to spare, you can find everything you need to get started on your own Poppy farm here, or read more at Design Engineering.
The open source machine-learning platform PredictionIO has just raised $2.5 million in funding, which will help bring the platform to the wider open source community. The company hopes to give organizations of all sizes access to automated data interpretation and prediction platforms, which have traditionally been reserved for those who can either a) afford expensive closed source options or b) take the time to develop their own machine-learning code in house. Couple that with those Poppy toddlers in the previous story, and wow! Read more at The VAR Guy.
Hacking the browser
Breach, a new open source browser launched earlier this month, is completely customizable – so customizable, in fact, that when you launch the browser it has no functionality at all. Unlike other browsers that allow the development of third-party plugins for extra functionality, Breach is customizable right down to the navigation and display. This could bring some innovative new ideas to increasingly stale browsers. Read more here or start hacking here.
Open source standards released in the UK
As part of its plan to migrate towards open source software, the UK government has announced that PDF/A or HTML are now the standard for viewing government documents, and Open Document Format (ODF) for sharing or collaborating on them. By moving towards open standards, the UK hopes to spur innovation and, of course, save money. Read more at Public Technology.
For our German readers…
We recently had an article on package pre-approval published (in German) in Elektroniknet. We also have a webinar on managing open source security vulnerabilities (also in German) coming up. You can register here.
Taking advantage of open source software and hardware, Samsung recently announced a plan to help entrepreneurs craft wearable technology that will revolutionize the health care industry, according to Samsung officials.
The Samsung Digital Health Challenge will be funded by $50 million, and the company hopes programmers will help create innovative, non-invasive technology that will improve the delivery of health care. Moreover, officials hope developers will build data collection sensors and algorithms that collect health tracking data that can be leveraged to provide better care.
To do that, Samsung has released both open source software and hardware to encourage the open source community to help meet the challenge’s goals:
- From the hardware perspective, the Simband is a wrist-worn band that lets programmers track whichever health metrics they want. The band can also have additional hardware integrated into it, and Samsung has already said it hopes partners will augment the technology in the future so it can be worn elsewhere.
- From the software perspective, the Samsung Architecture for Multimodal Interactions (SAMI) is a cloud-based platform that enables programmers to analyze the various data that the sensors collect. Because the platform is open source, developers will be able to access the data generated by their own projects as well as the data from other projects. The company hopes that, in the future, new algorithms can be built from the expansive health data collected by the program.
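To make that kind of analysis concrete — a minimal sketch, not SAMI’s actual API; the record shape, field names, and values below are illustrative assumptions — summarizing a stream of timestamped readings from multiple sensors might look like this:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sensor records, standing in for data a wearable might upload.
readings = [
    {"metric": "heart_rate", "t": 0, "value": 62},
    {"metric": "heart_rate", "t": 60, "value": 71},
    {"metric": "skin_temp", "t": 0, "value": 33.1},
    {"metric": "skin_temp", "t": 60, "value": 33.4},
]

def summarize(readings):
    """Group readings by metric and report min/mean/max for each."""
    by_metric = defaultdict(list)
    for r in readings:
        by_metric[r["metric"]].append(r["value"])
    return {
        m: {"min": min(v), "mean": mean(v), "max": max(v)}
        for m, v in by_metric.items()
    }

print(summarize(readings))
# heart_rate mean is 66.5; skin_temp mean is 33.25
```

Real health-analytics pipelines would of course run far richer algorithms over far more data, but the pattern — pooled, per-metric aggregation across projects — is the same.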
The University of California-San Francisco (UCSF) will be partnering with Samsung to test the technologies that emerge from the project.
“Our bodies have always had something to say but now, with advanced sensors, algorithms and software, we will finally be able to tune into what the body is telling us,” explained Dr. Michael Blum, associate vice chancellor of informatics at UCSF. “Validation of these technologies will improve the quality of data collected and help advance the ability to bring new products to market quickly.”
The fact that Samsung launched the initiative is perhaps a sign the company realizes it lacks the expertise to manufacture transformative wearable technology on its own. But by leveraging the open source community and investing in it, the company is likely to find some formidable partners.
No matter how sophisticated technologies become and how much mankind evolves, there is little—if anything—we can do to prevent natural disasters from occurring. What we can do, however, is implement technologies that help streamline the way we respond to such disasters.
And that’s where the World Bank Global Facility for Disaster Reduction and Recovery (GFDRR) comes into the equation. The organization educates governments and communities on how to respond most efficiently and effectively to natural disasters. One aspect of that management is Code for Resilience, an initiative run by the GFDRR that leverages the power of open source, bringing risk management decision makers and software developers together to work collaboratively on solving disaster-related issues.
For Dr. Alanna Simpson, senior disaster risk management specialist at GFDRR, such collaboration is one of the biggest perks of the open source and open data movements: bringing together two parties that might not otherwise interact. There are proprietary tools governments can leverage to help reduce the risks associated with disasters, but those tools are often expensive, meaning many governments don’t have the funds to deploy them, particularly in today’s challenging economy.
That’s what makes open source so attractive.
“Open source software and the availability of open data really lower the barrier for everyone to participate,” Simpson said. After all, the technology can be extremely cost-effective, with governments around the globe realizing substantial cost savings by choosing to deploy open source solutions.
Some of the best approaches to disaster risk mitigation, Simpson said, combine top-down and bottom-up approaches to data collection. Indonesia has served as an example of this philosophy in action, taking a community-based approach to disaster risk management: since 2011, a project there has mapped more than 1 million buildings across the country using open source tools developed for that particular cause.
To date, GFDRR has empowered 40 million people in 24 countries to access information about natural hazard risks. The group hopes that governments will be better equipped to respond to natural disasters when they do, unfortunately, occur in the future.
Recent vulnerabilities like Heartbleed served as a reminder of the importance of maintaining the integrity of networks and code so that systems and intellectual property remain protected at all times.
At Protecode, we understand that fully, which is why we’ve engineered our Global IP Signatures database to constantly cross-reference the National Vulnerability Database (NVD), the U.S. government repository of security checklists, security-related software flaws and more. In doing so, our customers benefit from real-time analysis of the third-party open source code contained in their projects.
That analysis generates a comprehensive report on all security vulnerabilities that may exist in their projects. Such insight allows customers to move forward with these projects knowing that the integrity of their code remains fully intact.
If our tools do in fact identify security vulnerabilities in existing open source code, our customers receive color-coded identification of those flaws indicating their severity. On top of that, the reporting tool highlights the components of the code that are flawed and also provides a description of what the glitches entail. Customers are also directed to the appropriate place on the NVD’s website where they can find additional information if they so choose.
As we saw with Heartbleed, serious new security flaws can be discovered at any time. Thanks to our analysis, our customers will know about these new glitches and bugs as soon as they are added to the NVD. Customers can also choose to track security vulnerabilities against certain packages.
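To illustrate the general idea — a minimal sketch, not our actual implementation; the advisory records and package list below are illustrative stand-ins — cross-referencing a project’s components against an NVD-style feed amounts to matching package names and bucketing each hit into the severity bands NVD applies to CVSS v2 base scores (0.0–3.9 low, 4.0–6.9 medium, 7.0–10.0 high):

```python
# NVD's CVSS v2 severity bands, highest threshold first.
SEVERITY_BANDS = [(7.0, "high"), (4.0, "medium"), (0.0, "low")]

def severity(cvss_score):
    """Return the NVD severity band for a CVSS v2 base score."""
    for threshold, label in SEVERITY_BANDS:
        if cvss_score >= threshold:
            return label
    raise ValueError("CVSS score must be non-negative")

# Advisory records standing in for entries from an NVD feed.
advisories = [
    {"cve": "CVE-2014-0160", "package": "openssl", "cvss": 5.0},  # Heartbleed
    {"cve": "CVE-2014-0001", "package": "libfoo", "cvss": 7.5},   # hypothetical entry
]

def report(project_packages, advisories):
    """List (cve, package, severity) for advisories affecting the project."""
    tracked = set(project_packages)
    return [
        (a["cve"], a["package"], severity(a["cvss"]))
        for a in advisories
        if a["package"] in tracked
    ]

print(report(["openssl", "zlib"], advisories))
# [('CVE-2014-0160', 'openssl', 'medium')]
```

A production tool layers onto this the continuous feed updates, code-signature matching, and per-package tracking described above, but the core cross-reference is exactly this kind of join between a component inventory and the vulnerability database.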
Protecting the integrity of your code and intellectual property is something we don’t take lightly. And we don’t expect you to figure out how to do it on your own. Open source security vulnerability reporting tools can do the hard work for you so that you can focus on other mission-critical areas of your business.
Open source of course! Get the scoop in this week’s collection of open source news…
Linux to go …
A new distribution of Linux, specifically Automotive Grade Linux (AGL), could soon be fuelling a new generation of open source powered cars. What this means is that a) future Herbies will be cyber-talking to each other, and b) because of open source, future souped-up cars will be created (and driven) by hackers. Read more at Tech Republic then find out about the Automotive Grade Linux Working Group.
Crowdfunding: who’s laughing now?
Ten years ago when David Rappo first came up with the idea for a crowdfunding site solely dedicated to financing open source software projects, people laughed. Since then both open source software and crowdfunding have become mainstream, so Rappo has re-launched Bountysource, an (open source) platform for getting open source projects off the ground. Read more here.
Short term pain could mean long term gain for the NHS
Up until now, the UK’s National Health Service (NHS) has been wary of switching from a proprietary to an open source operating (pun intended) system. And with good reason, since patient health records are sensitive information. But the discontinuation of support for its current XP operating system has renewed calls for a move to Linux. And while the initial switch won’t be without its headaches, the long-term cost savings could be just what the doctor ordered. Read more at The Conversation.
DNA under Apache 2.0?
Could DNA be open sourced, so you (assuming you are a garden-variety DNA scientist) could download it, modify it and create a whole new creature? John Schloendorn, CEO of medical start-up Gene And Cell Technologies, is proposing exactly that. He wants to take expensive (and restrictively licensed) proteins and make them open source. Scientists could then use these proteins to synthesize DNA. It’s an interesting proposition, covered in Radar (garden-variety DNA scientists go here).
if (dual_licensed) then open_source = $$
Patrick McFadin, chief evangelist for Apache Cassandra, recently explained the ups and downs organizations face when deciding to open source their products. He advocates staying away from a services model and sticking with a dual open source and commercial license model for proprietary add-ons. He also points out that licensing is a major consideration, since restrictive licenses like the GPL can both hamper and drive commercial growth depending on how they are applied. Read the full story at opensource.com.
Some friendly advice on managing vulnerabilities
Excuse us for tooting our own horn, but we think you may find the advice recently published in Law360 useful. Peruse our tips for managing open source security vulnerabilities here.
After being uncovered earlier this year, Heartbleed—the serious security vulnerability in OpenSSL that affected vast expanses of the Internet—was blamed on the open source community by some pundits. But simultaneously, many credited that same community for discovering the flaw in OpenSSL, which may otherwise have been missed, through its code review.
Either way, the confusion surrounding Heartbleed has led programmers to create their own iterations of OpenSSL, presumably in hopes that such a flaw won’t happen again. Last month, Google became the latest company to announce its own fork of OpenSSL—BoringSSL—a name the company says is “aspirational and not yet a promise.” In other words, Google hopes BoringSSL doesn’t cause the stir that OpenSSL did.
Earlier this year, other developers forked OpenSSL into LibreSSL because they felt that the pervasive standard for encrypting data sent to and from websites was “not developed by a responsible team.” At the same time, the Linux Foundation doubled down on OpenSSL via its Core Infrastructure Initiative.
Google did say that it was not intending for BoringSSL to replace OpenSSL. Instead, the company will continue sharing code with OpenSSL to help patch bugs and other vulnerabilities.
But what does this all mean for the open source community? OpenSSL was previously the go-to solution for encrypting communication between websites and individuals. Now, the consensus around the open source toolkit seems to have disappeared. Instead of OpenSSL evolving as the primary technology, at least three projects will progress separately.
Will one emerge as the de facto Web traffic encryption toolkit? Or will something new come down the pike? One way or another, open source programmers will keep writing code and working to create even stronger solutions.
Besides their common daily handling of significant amounts of money, the New York Stock Exchange, New York Mercantile Exchange and NASDAQ have something else in common: All three exchanges now rely on Linux.
In the world of finance, milliseconds matter. There is significant money at stake when one firm is able to make a trade a split second before another firm. High-frequency trading refers to using sophisticated technologies to facilitate the fastest trades possible. Because Linux is known for its low transaction and networking latency, financiers are increasingly relying on the open source operating system to help accelerate the speed with which they trade.
Jim Zemlin, the executive director of the Linux Foundation, recently spoke at the Linux Enterprise End-User Summit, addressing several hundred Wall Street executives as well as Linux developers about what he predicts for the future of technology.
Open source will lead the way.
“Hardware functions are increasingly being abstracted into software,” Zemlin explained. “More and more specialist hardware has been replaced by open source software running on generic x86 boxes.”
On the software side of the coin, open source is leading the way, Zemlin continued, due to the fact that companies are able to develop products faster thanks to code sharing. This results in high-quality products that cost less to produce.
According to Gartner, when it comes to today’s software, 80 percent of the code used is open source, while companies tweak the final 20 percent to give their programs their own personalities. Because of this, “People now have full-time jobs managing their external open source resources,” Zemlin said.
He expects the trend of open source adoption to be even more pervasive in the future.
Hanging chads and dimpled chads—what?
For anyone who needs a brief history refresher, the United States presidential election of 2000 was an interesting one to say the least: the race to determine whether George W. Bush or Al Gore won Florida’s 25 electoral votes—and thus the presidency—came down to a few hundred votes.
That margin was enough for a mandatory recount, and over the next few weeks the world watched as volunteers in the Sunshine State tried to determine voter intent on ballots with holes that weren’t completely punched. Whether to count ballots that were partially punched (hanging chads) or dented but not pierced (dimpled chads) became a big subject of debate in determining who would be the country’s 43rd president. (We all know how that turned out.)
The recount situation was a bit chaotic to say the least. To prevent such circumstances from occurring in the future, the Open Source Elections Technology Foundation (OSET) seeks to develop open source software that’s necessary to run an election. Members of the foundation envision creating a solution that facilitates smooth elections while also providing the added bonus of cash savings to governments at all levels.
Traditional voting machines can have serious glitches. A voter could try to support one candidate but the machine might record that person’s vote for the other candidate. There can also be misconfigured ballots or broken machines. It’s 2014, so these kinds of problems seem almost anachronistic due to the pervasiveness of technology in our lives.
OSET believes that its open source solutions will encourage more robust elections, as more companies will be encouraged to jump into the market since they’ll have a certified foundation upon which to build.
“Two vendors control 80 percent of America’s infrastructure,” explains Greg Miller, OSET chairman, and that results in “no incentive to innovate.” Investors seem to think OSET might be on to something, as the foundation expects to raise $6 million this year.
Governments around the world have been reaping the benefits of open source software for years. And this is yet another example of open-sourcing the political process.
Earlier this month, Tesla Motors—a darling of many Wall Street investors over the past year or so—announced that it would be releasing all of its electric car patents to the public.
According to CEO Elon Musk, Tesla originally was worried that some of the country’s larger manufacturers would copy the Palo Alto, California-based company’s technology and then put it out of business thanks to the massive manufacturing infrastructure those industry juggernauts already have in place. But that has not been the case, according to Musk. And since Tesla’s goal is to combat climate change, it’s counterintuitive to safeguard its technology.
“If we clear a path to the creation of compelling electric vehicles, but then lay intellectual property landmines behind us to inhibit others, we are acting in a manner contrary to the goal,” Musk wrote on Tesla’s blog. “Tesla will not initiate patent lawsuits against any who, in good faith, wants to use our technology.”
To date, the manufacturer of premium electric cars has had more than 2,400 patents awarded to it, giving it a stranglehold on a small market. Why would a company want to potentially loosen its grip on such a market?
For starters, Musk has already claimed that the electric car market hasn’t developed as fast as he had envisioned. By offering up Tesla’s technology as open source, Musk might very well be hoping to attract more investors into the green car sector. What’s more, should startups come along and leverage Tesla’s technology, the company can claim that its technology is the standard in electric cars, possibly discouraging someone else from developing solutions that rival or exceed Tesla’s.
In any case, the move is likely to spur innovation, as potential electric car manufacturers no longer have to spend time developing technology of their own but can instead use Tesla’s.
Drawn to the allure of reduced costs, better performance and increased control, institutions of higher learning have been adopting open source solutions for the past 15 years, give or take. As the technology’s popularity has increased, so too has the number of open-source-based projects. And that increase in projects has made the need for effective licensing more apparent.
Open source licensing generally requires both inbound and outbound agreements. When it comes to outbound licenses, programmers can adopt BSD-style licenses that allow them to charge for software or give it away for free. Or they can opt for GPL-style licenses that allow anyone to copy or modify software, so long as any modifications that person distributes are released back to the open source community.
On the other side of the coin there are inbound licenses, which are “far less well-known but no less critical,” according to Ian Dolphin, director of the Apereo Foundation. Under such licenses, programmers agree that their contributed code is original and that it can forever be used for free by anyone.
In a proprietary environment, it can be difficult to make agreements surrounding intellectual property rights when working together on a project, if those agreements are ever reached at all. But in an educational environment, where a nonprofit group manages a strong and transparent inbound and outbound licensing system, those arguments can subside rather quickly. This resolution proverbially paves a smooth road that allows collaboration to occur with less friction.
It doesn’t seem that there needs to be a nonprofit entity for every individual open source project generated at educational institutions, but it does appear that such groups can help protect intellectual property in a communal way. In doing so, projects progress faster, and the open source community—as well as those outside it—benefits tremendously from technological innovation.