All posts by OCSEA Editor


Everything You Know About the Stack is About to Change

I am at the OpenStack Summit here in Austin, and the announcements and releases keep rolling out, illustrating that the growing OpenStack market has some real teeth and is taking a bite out of the market standbys. Even so, there is still a great deal of fear, uncertainty and doubt (FUD) around the viability of clouds built upon OpenStack. The real question is whether that FUD is unfounded for today’s emerging markets.

That means taking a closer look at OpenStack is a must for businesses delving further into public, private and hybrid clouds.

The OpenStack Project, which is now managed by the OpenStack Foundation, came into being back in 2010 as a joint venture between NASA and Rackspace Hosting, with the goal of bringing collaborative, open-source software to the then-emerging cloud market. Today, the OpenStack Foundation boasts that some 500 companies have joined the project, and the community now collaborates around a six-month, time-based release cycle.

OpenStack, an open-source software platform designed for cloud computing, has become a viable alternative to the likes of Amazon Web Services (S3, EC2), Microsoft Azure and DigitalOcean. Recent research by the 451 Group has predicted a 40% CAGR, with the OpenStack market reaching some $3.5 billion by 2018, enough of a market to make all of the players involved take notice.

However, the big news out of OpenStack Summit Austin 2016 comes in the form of product announcements, with more and more vendors aligning themselves with the platform.

For example, HPE has announced its HPE Helion OpenStack 3.0 platform release, which is designed to improve efficiency and ease private cloud development, all without vendor lock-in problems.

Cisco is also embracing the OpenStack movement with its Cisco Metapod, an on-premises, preconfigured solution based on OpenStack.

Another solution out of the summit is the Avi Vantage Platform from Avi Networks, which promises to bring software-defined application services to OpenStack clouds, along with load balancing, analytics, and autoscaling. In other words, Avi is aiming to bring agility to OpenStack clouds.

Perhaps the most impressive news out of the summit comes from Dell and Red Hat, with the Dell Red Hat OpenStack Cloud Solution Version 5.0, which incorporates an integrated, modular, co-engineered and validated core architecture that leverages optional validated extensions to create a robust OpenStack cloud, one that integrates with the rest of the OpenStack community’s offerings.

Other vendors making major announcements at the event include F5 Networks, Datera, DreamHost, FalconStor, Mirantis, Nexenta Systems, Midokura, SwiftStack, Pure Storage, and many others. All of those announcements have one core element in common, and that is the OpenStack community. In other words, OpenStack is here to stay, and competitors must now take the threat of the open-source cloud movement a little more seriously.




Mobile Strategy and ROI Calculation

When it comes to developing a successful mobile strategy, and building a long-lasting relationship with customers, a CMO is often faced with difficult considerations around the best way to measure success.

The process of creating an app and investing significant amounts of money acquiring users is no longer enough to remain on the ‘first screen’ of any given mobile device — which is where any organization ultimately needs to be. It’s become necessary for teams to focus their efforts on techniques and campaigns that won’t only secure installs, but will maintain loyal relationships with mobile users.

There has been ample research conducted on determining ROI on the vast amounts that businesses invest in acquisition. Now though, it’s becoming increasingly apparent that the same attention should be given to money spent post-install as well.

Features of an engagement strategy, such as push notification campaigns, in-app messaging and user-experience A/B testing, are all techniques you’ll need to invest in to help deliver successful mobile relationships. Now all you need to do is demonstrate that there is a greater need for money spent here rather than elsewhere…

So, if you’re the CMO in this situation, how do you prove this effectiveness and need?

Well, after adopting some form of mobile marketing platform to handle this task, you would hope that your mobile analytics will change. You may see improvement in your engagement, retention, and ultimately, your revenue numbers. Perhaps obviously, this is the first and easiest way to consider ROI.

Once you get a grasp on it, and you begin to see these numbers change, calculating ROI is relatively easy. Think of it this way: if we grow a metric like average revenue per user (ARPU) from $5 to $10 using a marketing automation program, and we have 1 million monthly active users, then we can put $5 million per month into the credit column. If the total monthly spend on the program amounts to $100,000, that works out to a very (very!) satisfactory 4,900% ROI.
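That arithmetic is easy to sketch in a few lines. The function name is mine, but the numbers are the ones from the example above:

```python
def roi_percent(arpu_before, arpu_after, monthly_active_users, monthly_spend):
    """Simple monthly ROI: (incremental revenue - spend) / spend, as a percent."""
    benefit = (arpu_after - arpu_before) * monthly_active_users
    return (benefit - monthly_spend) / monthly_spend * 100

# ARPU grows $5 -> $10 across 1M monthly active users, on $100k program spend
print(roi_percent(5, 10, 1_000_000, 100_000))  # -> 4900.0
```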

Granted it won’t always be ARPU that we’re measuring, but in the vast majority of cases, there will be metrics with which we will measure mobile success, and once we add a quantifiable value to these, we’ll be able to establish decent ROI estimates.

The Campaign Level

Another, perhaps more reliable, way to measure ROI is to focus specifically on individual campaigns. Doing this will allow you to measure the effect of any changes within specific campaigns and sum them to provide a total benefit.

Assuming that you’re using a good marketing automation platform, you should get clear results from each individual campaign, against whichever metrics you choose to use, compared to a control group in order to isolate the campaign’s effect from other variables. By combining these multiple campaigns, we have a cumulative benefit that can be used to calculate ROI on the overall spend. Of course this approach won’t necessarily capture some benefits, such as the effect an overall improved experience can have on word-of-mouth – but it’s probably better to be conservative when calculating ROI anyway.
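To make the campaign-level approach concrete, here is a minimal sketch; the campaign names and dollar figures are entirely hypothetical, with each value representing the measured lift versus a control group:

```python
# Hypothetical per-campaign lift versus control, in dollars of benefit
campaigns = {
    "push_reactivation":  120_000,
    "in_app_upsell":       80_000,
    "onboarding_ab_test":  45_000,
}

total_spend = 100_000  # hypothetical total platform + campaign spend

# Sum the individually measured benefits, then compute ROI on overall spend
cumulative_benefit = sum(campaigns.values())
roi = (cumulative_benefit - total_spend) / total_spend * 100
print(f"Cumulative benefit: ${cumulative_benefit:,}; ROI: {roi:.0f}%")
```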

One thing that is vital to remember: don’t go looking for evidence of ‘good results’ after you’ve run the campaign. Human nature being what it is, you’ll probably find some. The key is to first identify the metrics that you want to have an impact on, and the effect that you hope to have, before you implement the campaign.

The Symbiotic Relationship of the OSI Model and Application Performance

It’s no secret that Application Performance Monitoring (APM) is becoming a critical competency in today’s enterprise networks. After all, so many enterprises are moving to a cloud-based model that leverages tiers of service, which brings unforeseen complexity into the task of keeping things running smoothly.

Traditionally, most IT managers have thought that application performance was a direct result of the static resources available, focusing on elements such as processor performance, available RAM, and perhaps the traffic saturation of the local area network (LAN). Although monitoring those elements remains critical for providing adequate performance, the APM game has changed, or more appropriately evolved, into something that must address much more than the status of the basic hardware that makes up a network.

That change (or evolution) has been driven by the adoption of technologies such as cloud services, hybrid cloud deployments, mobility, content delivery networks (CDNs), hosted databases and so on. Those new technologies have changed the basic dynamics of how an application is delivered to an end user and how the endpoint interacts with the data associated with the application.

A good example of that comes in the form of a line-of-business application delivered by a cloud service: a hosted application server delivers the application to an endpoint via a browser connection, the associated data is stored in a hosted database, and connectivity to both the application and the data is provided over the internet via a VPN. In that situation, multiple elements have to work in concert to provide acceptable application availability and performance, and any one of those “tiers” can have an effect on the application. What’s more, any single tier can impact any other, especially when virtualization is involved or a software-defined solution (SDN, SDDC, SDS, etc.) underpins operations.

Take the above example and apply it to the real world, where an IT manager gets a trouble ticket forwarded from the help desk that simply states “user is complaining of a slow application.” For that IT manager, the big question becomes where to start. Using the traditional approach, the starting point would be taking a look at the hardware and the network. However, that approach is all but useless in today’s world.

Today, the IT manager must turn to an APM platform to track down a problem, and getting the proper intelligence out of that platform is a critical component of successfully remediating any application performance problem. That said, the typical APM platform is little more than a measurement and reporting tool; it will assist an IT manager in solving the problem, but that manager must have an understanding of how the tiers of a hybrid, cloud-served network deliver an application. That understanding brings us to how the OSI model can serve as a tier template for application delivery.

If you think about the seven layers of the OSI model and what each is responsible for in the realm of network communications, it becomes clear how some of those layers can be mapped to the tiers of application delivery.

The OSI model is broken out into layers, each of which has a specific function. Each of those functions maps directly to the movement of information across a network.
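As a refresher, the seven layers can be listed out as follows. The second column, mapping each layer to an application-delivery concern, is my own illustrative pairing, not an authoritative APM taxonomy:

```python
# OSI layer number -> (layer name, an illustrative application-delivery concern)
OSI_LAYERS = {
    7: ("Application",  "hosted application / SaaS front end"),
    6: ("Presentation", "encryption, compression, data formats"),
    5: ("Session",      "user sessions, VPN tunnels"),
    4: ("Transport",    "TCP/UDP behavior between tiers"),
    3: ("Network",      "routing between cloud and on-premises"),
    2: ("Data Link",    "LAN switching"),
    1: ("Physical",     "cabling, interfaces, radio"),
}

# Print top-down, the way application delivery is usually traced
for num in sorted(OSI_LAYERS, reverse=True):
    name, concern = OSI_LAYERS[num]
    print(f"L{num} {name:<12} -> {concern}")
```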


If you align that basic concept with APM, it becomes clear how a symbiotic relationship forms between application delivery and the constructs of the OSI model. Below is a Citrix-based example.


When comparing the two models, it becomes clear that the OSI model is intertwined with the best practices of APM troubleshooting.

The question here becomes one of how well IT managers understand the implications of APM, and how understanding the OSI model becomes a critical competency for success.

For more information on the best practices for APM, please take a look at a webinar I participated in for eG Innovations, which can be found at

The driverless economy: what our city streets might be like in 2020

Paradoxically, here in early 2016, we are witnessing the lowest U.S. gas prices in years, but we are also moving toward a transportation era based on dramatically different economic premises, most obviously driverless vehicles. So it seems a perfect time to dig into the deep economics of cars, their impacts on city life, and what we can anticipate coming down the pike with the rise of driverless vehicles and smarter ways of living in cities once we can depend on AI-augmented transport.

Perhaps there is nothing so pedestrian as parking, a global phenomenon that we generally take for granted, along with many of the other externalized costs associated with car culture. The hard fact is that the typical car spends 95 percent of its working life parked, which means that little of the value of the car is actually realized. And, according to AAA, the average cost of owning and maintaining a mid-sized car in the U.S. in 2014 was almost $9,000 per year, of which $1,300 went just to parking! Therefore we should not be surprised that parking is a $100 billion industry. This despite the fact that as much as 98 percent of car trips — at least in Los Angeles — start or end with free parking, according to the California Household Travel Survey.

Parking also consumes a great deal of time. According to Donald Shoup, 16 studies conducted from 1927 to 2001 found that, on average, 30 percent of the cars in congested downtown traffic were cruising to find parking. He also notes that more recently, in 2006 and 2007, 28 percent of the drivers stopped at traffic signals in Manhattan, and 45 percent in Brooklyn, were searching for curbside parking. The average American takes four car trips a day, and if you figure two are commuting-based, that can still translate into a half hour or more of looking for a space.

We seldom think of how much of our cities is given over to cars, but one study found that 14 percent of Los Angeles is devoted to parking. Barclays Capital reported that we could see a 60 percent decline in the number of cars on the road, but the impact on parking could be much greater.

Obviously, the emergence of driverless vehicles suggests a great deal about the future of cities, and the impact on parking may be considerable. First of all, it’s clear that the on-demand car services like Uber and Lyft (along with car manufacturers like GM and Ford) have plans to provide driverless transportation to replace ‘drivered’ cars. That means that cars will not be parked after you get to the office, movie theater, or even grocery store.

One study suggested that a single driverless car could replace up to 12 drivered cars. Instead of being parked at its destination, the driverless car will simply move on to the next pickup. Another consideration is that driverless cars may be folded into municipal transport plans, like trains, buses, ferries and bicycles, and not managed like taxis or on-demand car services at all.

Even for those cars that are privately owned — which is likely to be a much smaller number considering how cheap Uberish services might be once the driver is out of the picture — driverless cars may be much more efficiently parked than human-managed ones, requiring dramatically less parking.

Source: Curbed Los Angeles

The frontiers of the future will be the ruins of the unsustainable.

One thing that is bound to change is the way that municipalities require a great deal of parking to be allocated for new housing, based on historic patterns. That is likely to change very quickly, and will immediately lead to denser and lower-cost housing, even before our cityscapes are dramatically remade.

There are other very significant economic impacts that will arise from driverless cars. It’s been estimated that accidents will fall 80 percent in the next 20 years as driverless cars sideline human drivers, who are demonstrably terrible at driving. As a direct consequence, car insurance will plummet, dropping from $125 billion in covered losses in the U.S. today to as little as $50 billion in 2040. But this is hard to predict, since we have no prior data. It could be much, much lower.

Source: The University of Texas at Austin

In the simulation above, we can get a sense of the driverless future. The white rectangles represent cars that have been ‘scheduled’ to pass through an intersection, while the yellow ones have yet to be scheduled. Once scheduled, the cars are coordinated in their passage, so that traffic signals are not necessary and the flow rate of the cars is much, much faster, with no need to stop.

The economics of frictionless traffic — without traffic lights, traffic jams, and built-in delays — is another large factor in the net savings from driverless transport. Living and working in the city of 2025 will feel totally different, and not just because there is no driver turning the wheel. It will be a totally foreign landscape, with little parking, no congestion, and much more space at street level dedicated to people, and with significantly fewer cars in view at any time. Driverless won’t mean carless, but cars will no longer dominate our cities as they do today. And I can’t wait.

Blockchains: The Next Generation of the Secure Web

Perhaps Bitcoin’s greatest gift to the web is not the disruptive nature of a digital currency, but the platform used to build that distributed, worldwide, decentralized cryptocurrency. Bitcoin’s platform, often referred to as a blockchain, uses an innovative approach to keep transactions secure, validate ownership and guarantee provenance.

A blockchain consists of a distributed cryptographic ledger shared amongst all nodes participating in the network, where every successfully performed transaction is recorded and shared. In other words, blockchains are proving to be a fully auditable, tamper-resistant database, one that has so far withstood known hacks and attacks.
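The core idea of that ledger, in which each block commits to the hash of the previous one so that history cannot be silently rewritten, can be sketched in a few lines. This is a toy illustration of the concept, not Bitcoin’s actual data structures or consensus protocol:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash over a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    """Each new block commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def is_valid(chain):
    """Tampering with any earlier block breaks every later prev_hash link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(chain))  # True

chain[0]["transactions"][0]["amount"] = 500  # rewrite history...
print(is_valid(chain))  # False: the tampering is immediately detectable
```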

Although the importance of the blockchain is often lost amongst the discussion of digital currency, blockchains have the potential to disrupt how the internet itself works. Simply put, there is a lot more to blockchains than just crypto-currency and monetary applications. Truth be told, a blockchain is a decentralized ledger protocol (and/or platform) that can govern both financial and non-financial types of application states.

A blockchain can be used to power decentralized business logic, which is contained in a cryptographic “element” that has intrinsic value and can only be unlocked if certain conditions are met. The business logic executes on a blockchain platform (a decentralized cloud service) once an automatic process validates that the terms and conditions set forth by participating parties are met.

Those concepts can be used to fuel the creation of P2P (peer-to-peer) or decentralized network solutions that allow virtual communities to create secure, auditable and highly tamper-resistant services and applications.

Ultimately, blockchains may reign supreme in distributed, shared environments used by both open and closed digital communities – a concept that has been well vetted by Bitcoin as its secure methodology for handling currency.

However, that leaves one question – how does one build a blockchain and create a community that can use it?

One answer comes in the form of an open-source platform that goes by the moniker of Ethereum, which is touted as a platform for decentralized applications but in reality has become a catalyst for building blockchain-based solutions. Ethereum leverages the blockchain ideology by providing both the platform and the development tools to build blockchain-based community solutions, which are decentralized in nature and highly resilient, while also being incorruptible.

However, the ideology of a cryptocurrency still permeates the Ethereum platform, even though it does not have to have any monetary value. In Ethereum’s case, that cryptocurrency is more appropriately referred to as a cryptofuel, which Ethereum has dubbed Ether. Ether is used to power transactions, pay for computational steps and democratize the distributed platform.

Without Ether, distributed applications could fall prey to infinite loops, excessive data consumption and many other problems that could effectively destroy a decentralized application. And applications are the key component of any community.

However, to use Ether as a cryptofuel to power a creative process, one must embrace the Ethereum platform, which means there has to be much more to Ethereum than a blockchain and cryptofuel. To that end, Ethereum has created a development environment called ETH/DEV, which offers IDEs, tools and resources used to build decentralized applications. Those applications can be fueled by Ether, and therein lies the most important element of the blockchain.

The blockchain itself keeps track of the fuel units (Ether), and transactions can be assigned a cost in Ether units, or even Ether payments, making all interactions transaction-based. Ether does not need to have a particular monetary value associated with it; Ether could be based upon reputation points earned, contributions to the community, or any other activity that adds or uses some type of value measurement.

For some community projects, content or code contributions may be the key element for earning Ether, which can then be used by the person earning it to “purchase” other elements from the community, escalate content or reach out to new audiences. The blockchain comes into play by creating and maintaining the ledger of who has how much Ether and how that Ether was earned, spent or transferred. In short, the applicability of Ether is limitless.
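As a toy illustration of such a community ledger (the member names, point values and reasons are hypothetical, and this is not Ethereum’s actual state model), crediting and debiting Ether-like units might look like this:

```python
# A minimal append-only ledger of Ether-like units: positive entries are
# earned (e.g. contributions), negative entries are spent.
ledger = []

def record(ledger, member, amount, reason):
    """Append one earn/spend event; the history itself is never edited."""
    ledger.append({"member": member, "amount": amount, "reason": reason})

def balance(ledger, member):
    """A member's balance is derived by replaying the full ledger."""
    return sum(e["amount"] for e in ledger if e["member"] == member)

record(ledger, "alice", 10, "code contribution")
record(ledger, "alice", -3, "promoted content to a wider audience")
record(ledger, "bob",    4, "documentation")

print(balance(ledger, "alice"))  # -> 7
print(balance(ledger, "bob"))    # -> 4
```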

The concept of using a cryptocurrency like Ether brings many possibilities to light – for example, digital contracts can be secured using Ether and then recorded in perpetuity via the blockchain. What’s more, Ether can be traded for services, software and other virtual elements, creating an economy based upon distributed applications.

One thing is certain: blockchain technology is here to stay, and organizations such as Ethereum are on the cusp of creating new decentralized solutions that eschew traditional borders and physical entities.


Hotels are getting over the millennials and returning to basics, at last

For what seems like millennia, hotel chains have been trying to tailor new ‘experiences’ that line up with carefully researched millennial leanings. Now, after many attempts to create more social shared spaces, new aesthetics to counter the old-school tastes of Boomers and Gen Xers, and other supposed innovations, we are seeing some new takes that ditch the millennial psychobabble and really try to get at what is emerging as travelers’ true desires.

Hyatt has launched a new Centric line of hotels, which feels like a serious departure from the adjective-laden attempts to get at the psyche of business and leisure travelers, and which instead just gets out of the way. A brand video refers to guests as “Modern Explorers” and “wish-listers.” The ‘lounge-centric’ design reminds me of the Ace Hotel in NYC, where guests and locals interact in a library-inspired setting.

“We call them Modern Explorers because these are travelers who are very curious, very independent, and very time crunched,” says Kristine Rose, VP of brands, Hyatt. “They have a wish list and they really want to make the most out of all of their experiences and reasons for traveling.”

These travelers want to be in the center of the urban experience, to interact with locals: local people, local food, local attractions. The restaurant is called ‘the Local Bar and Restaurant’ and will feature local dishes served up for the ‘casual foodie’.

I can attest to the attractiveness of the Centric concept to non-millennials, since I am a late Boomer, and the practicality of ‘the essentials done right’ combined with a deeply local orientation could be exactly the cure for the experience I regularly have when traveling, even in luxury hotels.

At the other end of the spectrum, Hilton is also working away at trimming out the inessential; however, in its new Tru hotels, it is cutting out business-oriented amenities like desks and targeting the budget-conscious leisure traveler.

As the company says, “Tru by Hilton is a category disrupter. It’s built on a belief that being cost conscious and having a great stay don’t have to be mutually exclusive. Tru focuses on what matters most to guests, with a hotel that is more than just a place to sleep, it’s a true travel experience.”

Hilton is running Tru as a 100% franchise operation, with systems designed from the bottom up to cut operational costs, leading to a $75-$90/night price point. This is an effort to appeal to people who might otherwise turn to Airbnb for accommodations, but who would really rather have a no-frills hotel, so long as quality reaches some minimum.

So what is the deep trend? Modern travelers want no-fuss, easy in-and-out hotels that meet some promise of quality at a price — at various tiers — but that appeal to their desire to explore the hotel’s locale rather than remaining cooped up in private rooms or stodgy same-old-same-old eateries. A return to simplicity: a night’s stay and off you go!

Three Challenges the Hardware Industry Faces in 2016

2016 is the year many thought leaders in the tech space are urging caution, expecting markets to cool drastically and urging startups to stay afloat by minimizing burn rate. Yet at the same time, hardware is the fastest-growing sector in the market, with investment up 30x since 2010.

At this important juncture, what does the future hold for hardware companies?

To better understand where the hardware industry’s opportunities lie, what the greatest perceived challenges are, and what it means to be a hardware founder today, we surveyed over 200 hardware companies and uncovered a lot of interesting information. Here are the highlights.

Hardware Companies are Working to Build Products Faster

In our report, we found that on average most companies budget one to three months to build a functional prototype. Similarly, the majority of companies budget just three to six months to go from functional prototype to production.

If you’re not familiar with hardware development lifecycles, just know that this kind of schedule is incredibly fast (and ambitious) compared with what was possible just five years ago. Hardware startups are increasingly seeking to become leaner in order to get to market faster and maximize capital investment.

But while companies are working hard to be lean and build faster, the outcomes don’t always match expectations. Data shows that about four out of five VC-backed crowdfunding projects were late in 2014, and of those late projects (a total cohort of 91 companies), 30 percent still hadn’t shipped by Q1 2015.

Hardware companies are setting ambitious schedules to get to market faster, and that’s fantastic and important, but there are clearly still obstacles preventing companies from building as fast as they’d like to.

What are these obstacles and how can we overcome them? Well, there are many, and I won’t mention them all in this post, but one of the major ones we’re focusing on at Fictiv is prototyping speed. Iterating on a physical product is inherently slower than iterating on a digital product, but if we can help companies to iterate daily vs weekly, that’s a huge step forward.

Hardware Companies Seek Access to Better Tools

One of the key factors that has contributed to massive growth in the hardware sector is an increase in the number of tools available to hardware companies for prototyping and development.

We asked companies which tools they leverage in the development of their products and saw that 91% of companies use 3D printing, 58% use breadboards and 51% use Arduino, among others. (Honorable mention goes out to the tried-and-true duct tape, used by 46% of survey takers!)

On the design side of things, there are a large variety of CAD programs available, but according to our results, Solidworks still reigns supreme, used by 70% of our survey takers.

While there’s been a big uptick in the number of tools available, we need to continue to teach a wider audience how to use these tools most effectively. Arduino and Adafruit, for example, are doing a fantastic job educating people on the electronics side, Dragon Innovation is teaching young companies how to work with manufacturers in China, and on our blog we’re educating engineers and designers on how to prototype on the mechanical side of things.

However, access to tools is not enough to make a successful hardware company—we need to document and codify the knowledge around how to best use these tools and manufacture products at scale.

Raising Capital is Top of Mind

We polled companies on the greatest challenge in bringing a successful product to market, and 28% said funding and resources was #1.

And they’re not alone—this feeling is being echoed by thought leaders across the venture capital space. For example, Mark Suster, partner at Upfront Ventures, cautions: “I suspect 2016 will be the year that the more heated private tech markets cool.” Similarly, Fred Wilson, co-founder of Union Square Ventures, recently projected that “Markdown Mania will hit the venture capital sector as VC firms follow Fidelity’s lead and start aggressively taking down the valuations in their portfolios.”

In response to VCs urging caution this year, minimizing burn rate and staying lean is the mantra for hardware startups in 2016.

The good news is that hardware is still the fastest growing sector in the industry and investment has been increasing at astounding rates: Investment in hardware is up 30x since 2010 and venture capital dollars in IoT have gone from $1.8 billion in 2013 to $2.9 billion in 2014 and $3.44 billion in 2015.

To stay lean, hardware companies should consider burn rate and optimize for speed in the prototyping stage of development. Often we see cost-conscious startups skimp on up-front costs rather than considering the cost of wasted time, which ultimately comes down to burn rate (people are your biggest expense). So every time you order a 3D-printed part, for example, the true cost of that part is really (part cost + (lead time × daily burn rate)).
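That true-cost formula is trivial to encode. The numbers below are hypothetical, chosen only to show how quickly lead time dwarfs the sticker price:

```python
def true_part_cost(part_cost, lead_time_days, daily_burn_rate):
    """Sticker price plus the burn you pay while waiting for the part."""
    return part_cost + lead_time_days * daily_burn_rate

# A hypothetical $50 print: 3-day lead time at a $2,000/day team burn rate
print(true_part_cost(50, 3, 2_000))   # -> 6050
# The same part with a 1-day turnaround costs far less in real terms
print(true_part_cost(50, 1, 2_000))   # -> 2050
```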

Main Takeaways

The evidence from our State of Hardware Report points toward incredible potential for the hardware industry. More and more companies are building innovative products, we have better tools and technologies for prototyping, and the community is strong and passionate about open-source knowledge.

But we still have a ways to go before hardware development can truly be accessible to everyone. We hope this snapshot of information points the community in the right direction to understand how to make hardware universally accessible, so we can continue to build better tools and resources for truly democratized hardware development.

Are Your Virtual Meetings Actually Secure?

Businesses large and small have turned to VoIP, videoconferencing and many other IP-enabled communications platforms to enhance collaboration and speed the decision-making process. However, few consider the security implications of conducting meetings over internet-connected devices, and many may be leaving themselves open to eavesdroppers at best, and corporate espionage at worst.

Those technologies, which include VoIP, videoconferencing, hosted webinars and IP-based communications platforms, have transformed the way businesses communicate, creating a paradigm shift that has resulted in the virtual meeting place, or virtual conference room. Yet, for all of the productivity created, a deep, dark secret lingers in the shadows, one that can be summed up simply: who can eavesdrop on those virtual meetings, or intercept the data shared?

That secret culminates in a real-world threat, where the specter of corporate espionage, powered by IP-based communications, can result in lost revenue and failed projects. Simply put, securing all forms of communication should be a major concern for any business looking to share confidential data or discuss intellectual property across the unfettered packets flying across the internet.

After all, businesses spend countless dollars on firewalls, security appliances and other InfoSec technologies to protect files and prevent unauthorized access to corporate systems, yet it seems little thought is put into securing technologies that have become all too common, such as videoconferencing and hosted IP based conferencing platforms.

To be effective, IP-based conferencing has to be easy to use, easy to access and flexible enough to be reconfigured on the fly. What’s more, conferencing must be able to work across several different devices, ranging from smartphones to desktop PCs to dedicated IP conference-room appliances. Simply put, if the platform makes things difficult for users, those users will attempt to go another route, such as an open or “free” system, further complicating the security picture.

Therein lies the security conundrum of virtual meetings. How can IT professionals make it both easy to use and secure from data leakage?

The answer to that conundrum lies in rethinking how users engage with their meeting platforms of choice. In other words, a conferencing system has to be both easy to use and easy to secure, two qualities that normally sit at opposite ends of the communications equation.

To that end, videoconferencing vendor Pexip has launched Infinity, a hosted platform that combines ease of use with policy-based enforcement to create secure virtual meeting rooms. The product accomplishes that by leveraging an external policy server, which allows administrators to define policies that enforce security rules based on multiple factors, such as user identity, location and device.

Of course, establishing identity is only the first part of the security equation. Here, Pexip brings some additional capabilities to the table, such as assigning a temporary PIN to a particular meeting and then delivering that PIN via an RSA token, SMS or another method, so that two-factor authentication becomes the norm for any conference.

For example, with SMS, each time the policy server receives a meeting request, a dynamic PIN is generated (and stored for 60 seconds). That PIN is then delivered to the meeting attendee at their assigned phone number, which the policy server can look up in the directory. The attendee uses that PIN as part of the authentication to enter the meeting.
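The flow described above can be sketched in a few lines of Python. Note that this is a minimal illustration of the general pattern (a short-lived, one-time PIN keyed to a meeting and an attendee), not Pexip's actual policy-server API; the `PinStore` class and its method names are hypothetical.

```python
import secrets
import time

PIN_TTL_SECONDS = 60  # the PIN is only stored for 60 seconds


class PinStore:
    """Issues short-lived, one-time PINs keyed by meeting and attendee."""

    def __init__(self, ttl=PIN_TTL_SECONDS):
        self.ttl = ttl
        self._pins = {}  # (meeting_id, attendee) -> (pin, issued_at)

    def issue(self, meeting_id, attendee):
        pin = f"{secrets.randbelow(10**6):06d}"  # 6-digit numeric PIN
        self._pins[(meeting_id, attendee)] = (pin, time.monotonic())
        # In a real deployment, this is where the PIN would be sent by SMS
        # to the phone number looked up in the directory.
        return pin

    def verify(self, meeting_id, attendee, pin):
        entry = self._pins.pop((meeting_id, attendee), None)  # one-time use
        if entry is None:
            return False
        stored_pin, issued_at = entry
        if time.monotonic() - issued_at > self.ttl:
            return False  # expired after 60 seconds
        return secrets.compare_digest(stored_pin, pin)


store = PinStore()
pin = store.issue("board-meeting", "alice")
assert store.verify("board-meeting", "alice", pin)      # correct PIN admits the attendee
assert not store.verify("board-meeting", "alice", pin)  # replaying the same PIN is rejected
```

The key design points are the same ones the policy server enforces: the PIN expires quickly, is consumed on first use, and is tied to both the meeting and the individual attendee.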

There is a lesson to be learned here: security ideologies must flow down to even the most basic of corporate communications.

How Biology is Inspiring the Next Generation of Cybersecurity

Your average security operations center is a very busy place. Analysts sit in rows, staring intently at computer monitors. Cybersecurity alerts tick past onscreen—an average of 10,000 each day. Somehow, the analysts must decide, in seconds, which of these are false alarms, and which might be the next Target hack. Which should be ignored, and which should send them running to the phone to wake up the CIO in the middle of the night.

It’s a difficult job.

The alerts are false alarms the vast majority of the time. Cybersecurity tools have been notoriously bad at separating the signal from the noise. That’s no surprise, since the malware used by hackers is constantly mutating and evolving, just like a living thing. The static signatures that antivirus software uses to detect them are outdated almost as soon as they are released.

The problem is that this knowledge can cause a kind of numbness—and make tech teams slow to act when cybersecurity software does uncover a real threat (a problem that may have contributed to the Target debacle).

Luckily, a few government labs are experimenting with a new approach—one that starts with taking the “living” nature of malware a little more seriously. Meet the new generation of biology-inspired cybersecurity.

Sequencing Malware DNA
The big problem with signature-based threat detection is that even tiny mutations in malware can fool it. Hackers can repackage the same code again and again with only a few small tweaks to change its signature. The process can even be automated. This makes hacking computers cheap, fast, and easy—much more so than defending them.

Margaret Lospinuso, a researcher at Johns Hopkins University’s Applied Physics Laboratory (JHUAPL), was pondering this problem a few years ago when she had a brainstorm. A computer scientist with a lifelong interest in biology, she was aware that programs for matching DNA sequences often had to ignore small discrepancies like this, too. What if she could create a kind of DNA for malware—and then train a computer to read it?

DNA maps out plans for complex proteins using only four letters. But CodeDNA uses a much longer alphabet to represent computer code. Each chunk of code is assigned a “letter” depending on its function—for example, a letter A might represent code that opens a certain type of file, while a letter B might represent code that opens a server connection. Once a suspicious computer program is translated into this type of “DNA,” Lospinuso’s software can then compare it to the DNA of known malware to see if there are similarities.

It’s a “lossy technique,” says Lospinuso—some of the detail gets scrubbed out in translation. However, that loss of detail makes it easier for CodeDNA to identify similarities between different samples of code, Lospinuso says. “Up close, a stealth bomber and a jumbo jet look pretty different. But in the distance, where details are indistinct, they both just look like planes.”
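A toy version of this idea can be sketched in Python. The category-to-letter table and the use of a generic sequence matcher are illustrative assumptions, not the actual CodeDNA algorithm; the point is that once code is reduced to a functional "DNA" string, a small repackaging tweak barely changes the similarity score.

```python
from difflib import SequenceMatcher

# Hypothetical alphabet: each functional category of code maps to one letter.
CATEGORY_LETTERS = {
    "open_file": "A",
    "open_socket": "B",
    "write_registry": "C",
    "spawn_process": "D",
    "encrypt_buffer": "E",
}


def to_dna(chunks):
    """Translate a list of functional categories into a 'DNA' string.
    Details (arguments, exact bytes) are deliberately discarded - the
    lossy step that makes small repackaging tweaks invisible."""
    return "".join(CATEGORY_LETTERS[c] for c in chunks)


def similarity(sample, known):
    """Score two DNA strings between 0.0 (unrelated) and 1.0 (identical)."""
    return SequenceMatcher(None, sample, known).ratio()


known_malware = to_dna(["open_socket", "open_file", "encrypt_buffer", "spawn_process"])
repackaged = to_dna(["open_socket", "open_file", "encrypt_buffer",
                     "write_registry", "spawn_process"])  # one chunk inserted

# The repackaged sample still scores as highly similar to the known malware.
print(f"{similarity(repackaged, known_malware):.2f}")
```

A signature match on the raw bytes would fail here, but at this "distance" the two samples still look like the same plane.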

The resulting technique drastically cuts down on the time analysts need to sort and categorize data. According to one commercial cybersecurity analyst, the similarities CodeDNA found in two minutes would have saved him two weeks of hard work. But the biggest advantage of CodeDNA is that it won’t be fooled by small tweaks to existing code. Instead of simply repackaging old malware, hackers have to build new versions from scratch if they want to escape detection. That makes hacking vastly more time-consuming, expensive, and difficult—exactly how it should be.

How to Build a Cyber-Protein
Lospinuso’s team built CodeDNA’s software from scratch, too; it’s different from standard DNA-matching software, even though both implement the same basic techniques. Not so with MLSTONES, a technology developed at Pacific Northwest National Laboratory (PNNL). MLSTONES is essentially a tricked-out version of pBLAST, an open-source software program for deciphering protein sequences. Proteins are constructed from combinations of 20 amino acids, giving their “alphabet” more complexity than DNA’s four-letter one. “That’s ideal for modeling computer code,” said project lead Elena Peterson.

MLSTONES originally had nothing to do with cybersecurity. It started out as an attempt to speed up pBLAST itself using high-performance computing techniques. “Then we started to think: what if the thing we were analyzing wasn’t a protein, but something else?” Peterson said.

The MLSTONES team got a bit of encouragement early on when their algorithm successfully categorized a previously unknown virus that standard anti-virus software couldn’t identify. “When we presented [it] to US-CERT, the United States Computer Emergency Readiness Team, they confirmed it was a previously unidentified variant of a Trojan. They even let us name it,” Peterson said. “That was the tipping point for us to continue our research.”

Peterson says she is proud of how close MLSTONES remains to its bioinformatics roots. The final version of the program still uses the same database search algorithm that is at the heart of pBLAST, but strips out some chemistry and biology bias in the pBLAST software. “If the letter A means something in chemistry, it has to not mean that anymore,” Peterson says. This agnostic approach also makes MLSTONES extremely flexible, so it can be adapted to uses beyond just tracking malware. A version called LINEBACKER, for instance, applies similar techniques to identify abnormal patterns in network traffic, another key indicator of cyber threats.
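One way to picture this agnostic reuse of protein tooling is to hash arbitrary tokens onto the 20 standard amino-acid letters, so that any protein sequence-alignment tool can consume the result without knowing (or caring) what the letters originally meant. This is an illustrative sketch under that assumption, not PNNL's actual MLSTONES implementation.

```python
import hashlib

# The 20 standard amino-acid letters that protein-search tools understand.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"


def tokens_to_protein(tokens):
    """Map each non-biological token onto one of the 20 amino-acid letters.

    The mapping is arbitrary but stable: identical tokens always get the
    same letter, which is all a sequence-alignment tool needs. Any chemical
    meaning the letters carry in biology is irrelevant here - the
    'agnostic' approach Peterson describes.
    """
    out = []
    for tok in tokens:
        digest = hashlib.sha256(tok.encode("utf-8")).digest()
        out.append(AMINO_ACIDS[digest[0] % len(AMINO_ACIDS)])
    return "".join(out)


# The same idea works for network events (as in LINEBACKER) as for code.
events = ["dns_query", "tcp_syn", "tls_handshake", "dns_query"]
seq = tokens_to_protein(events)
assert len(seq) == len(events)
assert seq[0] == seq[3]  # repeated events always map to the same letter
```

The payoff is that decades of optimised bioinformatics search machinery can be pointed at malware or network traffic without modification, because the input looks like an ordinary protein sequence.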

A Solution to Mutant Malware
Cyberattacks are growing faster, cheaper, and more sophisticated. But all too often, the software that stops them isn’t. To secure our data and defend our networks, we need security solutions that adapt as fast as threats do, catching mutated malware that most current methods would miss. The biology-based approach of CodeDNA and MLSTONES isn’t just a step in the right direction here—it’s a huge leap. And with luck, they will soon be available to protect the networks we all rely upon.

With contribution by Nathalie Lagerfeld of Hippo Reads.

Why adopt a mobile-first development strategy?

“We think mobile first,” stated Macy’s chief financial officer Karen Hoguet, in a recent earnings call with financial analysts.

A quick glance at the US department store chain’s 2015 financial results explains why mobile technologies might be occupying minds and getting top priority there. Sales made by shoppers over mobile devices were a definite bright spot in an otherwise disappointing year for the company. Mobile revenues more than doubled, in fact, thanks to big increases in the number of shoppers using smartphones and tablets not only to browse, but also to buy.

So it’s no surprise that Macy’s hopes to maintain this trend, by continuing to improve the mobile experience it offers. In the year ahead, Hoguet explained, this ‘mobile first’ mindset will see Macy’s add new filters to search capabilities, clean up interfaces and fast-track the purchase process for mobile audiences.

Other consumer-focused organisations are thinking the same way and the phrase ‘mobile first’ has become something of a mantra for many. One of its earliest high-profile mentions came way back in 2010, in a keynote given by Eric Schmidt, then Google CEO (and now Alphabet executive chairman), at Mobile World Congress in Barcelona.

“We understand that the new rule is ‘mobile first’,” he told attendees. “Mobile first in everything. Mobile first in terms of applications. Mobile first in terms of the way people use things.”

The trouble is that, for in-house development teams, a mobile-first strategy still represents something of a diversion from standard practice. They’re more accustomed to developing ‘full size’ websites for PCs and laptops first, and then shrinking these down to fit the size, navigation and processing-power limitations posed by mobile devices.

The risk here is that what they end up with looks like exactly what it is: a watered-down afterthought, packing a much weaker punch than its designed-for-desktop parent.

A development team that has adopted a mobile-first strategy, by contrast, will start by developing a site for mobile that looks good and works well on small form factors, and then ‘work their way up’ to larger devices, adding extra content and functions as they go.

That approach will make more and more sense as more ‘smart’ devices come online and the desktop PC becomes an increasingly minor character in our day-to-day lives. Take wearables, for example: many CIOs believe that headsets, wrist-mounted devices and the like hold the key to providing workers with relevant, contextual information as and when they need it, whether they’re up a ladder in a warehouse or driving a delivery van.

Developing apps for these types of devices presents many of the same challenges associated with smartphones and tablets: minimal screen real estate, limited processing power and the need to integrate with third-party plug-ins and back-end corporate systems. Then there’s the lack of a standardised platform for wearables to consider, meaning that developers may need to adapt their mobile app to run on numerous different devices. For many, it may be better to get that hard work out of the way at the very start of a project.

In a recent survey of over 1,000 mobile developers conducted by InMobi, only 6% of respondents said they had created apps for wearables, but 32% believe they’re likely to do so in future.

The same rules apply to a broader category of meters and gadgets that make up the Internet of Things, from meters for measuring gas flow in a utilities network, to products for ‘smart homes’, such as the Canary home-monitoring device, to virtual reality headsets, such as Samsung’s Gear VR, as worn by attendees at Facebook CEO Mark Zuckerberg’s keynote at this year’s MWC.

As the population of ‘alternative’ computing devices grows, developers will begin with a lean, mean mobile app, which functions well despite the constraints of the platform on which it runs, having made all the tough decisions about content and function upfront. Then, having exercised some discipline and restraint, they’ll get all the fun of building on top of it, to create a richer experience for desktop devices.

More importantly, they’ll be building for the devices that consumers more regularly turn to when they want to be informed, entertained or make a purchase. In the US, digital media time (or in other words, Internet usage) on mobile is now significantly higher at 51% than on desktop (42%), according to last year’s Global Internet Trends Report by Mary Meeker of Silicon Valley-based venture capital firm Kleiner Perkins Caufield & Byers (KPCB).

In other words, developers should go mobile first, because that’s what we consumers increasingly do.


Picture Credit: Farzad Nazifi