All posts by OCSEA Editor


Dropbox proactively asking users to reset old passwords

Dropbox users who haven’t changed their passwords since mid-2012 will be prompted to do so the next time they sign in. The company made the announcement yesterday in a blog post by Patrick Heim, Head of Trust & Security at Dropbox. Heim described the motivation:

Our security teams are always watching out for new threats to our users. As part of these ongoing efforts, we learned about an old set of Dropbox user credentials (email addresses plus hashed and salted passwords) that we believe was obtained in 2012. Our analysis suggests that the credentials relate to an incident we disclosed around that time.

Based on our threat monitoring and the way we secure passwords, we don’t believe that any accounts have been improperly accessed. Still, as one of many precautions, we’re requiring anyone who hasn’t changed their password since mid-2012 to update it the next time they sign in.


The incident Heim mentioned occurred in 2012 and involved a file of usernames and passwords, some of which have recently been used to access accounts.

Users are also being asked to set up two-factor authentication, which many avoid because it adds time to login, but which is a wise security move.
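Heim’s point that the exposed credentials were “hashed and salted” is the key mitigating detail. As a rough illustration (not Dropbox’s actual scheme), salted password hashing looks something like this sketch in Python, using the standard library’s PBKDF2:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash from a password using PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)
```

Because each user gets a random salt, identical passwords produce different stored hashes and precomputed lookup tables become useless; an attacker has to grind through the key-derivation work separately for every account.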

Picture this: Exploring the Internet of Visuals

As eagerly awaited as it is wide-ranging, Mary Meeker’s annual Internet Trends Report is a data goldmine for anyone who wants to better understand the current state and future direction of the Web.

Meeker, an analyst with Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, has been presenting this report for over 20 years. Her 2016 presentation ran to 213 slides, delivered in a whistle-stop 24 minutes.

Given her typical breakneck speed of delivery, Meeker had little more than three minutes to devote to one of her most interesting observations, but it’s one that has huge implications for online and mobile marketing.

In short, the Internet is becoming a far more visual medium, as consumers and businesses increasingly opt for pictures and videos over text-based communications as a means of storytelling. It’s the younger generation (between the ages of 18 and 34) that shows the most marked preference for pictures over words.

The rise of images has a lot to do with users’ increasing use of smartphones for storytelling, sharing, messaging and creative expression, Meeker said. Worldwide smartphone users grew 21% to over 2.5 billion in 2015, with Android shipments continuing to gain share over iOS: 81% versus 16%, respectively. The average global mobile user now has around 33 apps installed on their device, uses 12 of them daily, and spends around four hours per day on their smartphone.

Because these devices combine cameras, comms and social media access in one handy package, it’s never been easier to take and share images – and to view and comment on images taken and shared by other people.

Every day, almost 3 billion images are shared on Snapchat, Facebook, Facebook Messenger, Instagram and WhatsApp (all of which, except Snapchat, are owned by Facebook). That’s up from just under 2 billion in 2014.

When it comes to video, meanwhile, daily video views on Facebook rose from 1 billion to 8 billion between 3Q2014 and 3Q2015. On Snapchat, they reached 10 billion in the first quarter of 2016.

User-generated content can result in surprise wins for brands. Take, for example, ‘Chewbacca Mom’ Candace Payne’s viral video, which saw her hooting with laughter, alone in her car, over a Star Wars mask she’d just bought from department store Kohl’s.

The video was viewed over 150 million times in one day – and also sent Kohl’s mobile app straight to the top of the rankings in the iOS app store. Demand for the mask, naturally, went through the roof. It’s the kind of result that not even the most carefully planned (and costly) marketing campaign could have orchestrated.

But at the same time, brands are working hard to get in on the act, by tuning into our passion for visuals and using them to help promote their brands in new and creative ways.

Pinterest, for example, is fast emerging as a potential goldmine for brands. According to Meeker’s data, around 55% of the site’s users (who passed the 100-million mark in September 2015) say they use the visual bookmarking site to find and shop for products. In response, Pinterest is ramping up its online shopping capabilities, announcing in June 2015 that it would introduce not only shopping carts to the site, but also visual search, which allows users to upload a photo of a product they like and see images of similar, competing products.
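Visual search of this kind typically rests on comparing compact image fingerprints rather than raw pixels. A minimal sketch in Python (a toy average-hash, not Pinterest’s actual algorithm):

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel of a small grayscale
    thumbnail, set when the pixel is brighter than the mean intensity."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count differing bits; smaller distances suggest visually similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical product shots hash alike; an inverted image does not.
photo = average_hash([10, 200, 30, 220])
retake = average_hash([12, 198, 28, 225])
inverted = average_hash([200, 10, 220, 30])
```

A real system would first downscale each uploaded photo to a small grayscale grid, then rank catalog images by distance to return the closest-looking products.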

Snapchat filters are another emerging use of Internet-based visuals by brands. These allow users of the site to superimpose masks on photos of themselves, friends, family or public figures. This year’s Cinco de Mayo festival, for example, saw Taco Bell release a branded Snapchat filter that turns the subject’s face into a giant taco. This won the fast-food brand 224 million views.

But creative thinking like this will be key, because traditional video ads simply do not pass muster with modern audiences. In fact, said Meeker, they are largely “ineffective” at engaging audiences: 62% of users say they’re annoyed by the ‘pre-roll’ ads that precede, for example, a YouTube clip; 81% say they mute them; and 92% say they’ve considered using ad-blocking software. A picture (or video) may be worth a thousand words – but not where it fails to delight, entertain or inspire.


Why is resource management in the corporate environment still a challenge?


You know the score. You’re faced with the need to find a quiet space to speak with a colleague or client, but didn’t book anywhere. So you grab what looks to be a free meeting room. And thus begins a logistical series of bumps and relocations. “Oh, sorry, didn’t realize you’d booked it,” you say. So you gather papers and move somewhere else, for a short time, at least.

Scheduling resources is possibly one of the most tantalizing opportunities offered by technology, but even the simplest of options remain outside our algorithmic grasp. Sure, options exist for meeting room scheduling — that’s not the point (and it leads to that other, standard part of office life — the “didn’t you check it was free on the calendar?” conversation).

Scheduling ourselves is also a huge challenge. This isn’t helped by the oh-so-close-but-not-quite compatibility between calendaring services from Microsoft, Google, Apple, et al. Those working within an organization can stick with one system (I can remember Microsoft saying the biggest problem it had to overcome, way back when it launched Exchange 5.5, was managing free/busy information). But right now, beyond sharing iCal data, there is little guarantee that a resource will be allocated.

Clearly nobody sent the memo that we’re working in ecosystems, not enterprises these days. Even if your company has a corporate standard, as soon as you try to book a meeting with a client or a supplier, you hit the challenge of knowing whether they actually received the meeting request. Oh, woe to all such services and a plague on their houses, as they try to drag you into a Skype session when no such facility exists, or offer a shared space only accessible to a subset of participants.

It should all be so simple, but it quite clearly is not, for a number of reasons. First, the complexity of resource scheduling, even for something as simple as calendaring meeting rooms, is beyond the ken of most of the tools corporate users have at their disposal. Supply chain, fleet management and spares management companies have capabilities to deal with their specific needs, but these are as expensive as they are custom, and not suitable for the masses.

Not suitable yet, that should read. Consider Workwave, for example, which offers field service management for organizations that rely on scheduling people to deliver their services, from home care to pest control, but whose heritage is in office efficiency. From Workwave and others we can see how field service is maturing, adding features such as attaching video clips to an invoice to show a job is complete, or location-based mechanisms that enable more efficient routing.

In parallel, field service is looking towards analytics, to identify ways to become more efficient (or most likely, identify places where they are being inefficient) — such as using routing data to identify repeated bottlenecks. “You can get previews into customer experience hotspots, where possible issues are,” says Workwave’s CEO, Chris Sullens. The consequence is better orchestration, to the benefit of all parties — company, employee and customer. “For some categories, e.g. maid service, we can take a planned activity and make it on-demand,” continues Chris.

In other words, field service resource scheduling can move to real time if the systems are there to manage it, increasing flexibility without adding cost. Such features clearly have a place beyond field service, as do more leading-edge capabilities such as the use of wearables. For these advanced capabilities to be effective, however, they need to be open and integrated. “The challenge is, you want all or nothing,” explains Chris. “You’re going to have to hit a tipping point otherwise not going to get a benefit.” This is as true in the office environment as in a food plant.

Technology has a habit of commoditizing, with features jumping from vertical-specific areas into what might be termed ‘the mainstream’. In this case, our ability to orchestrate our time with each other and the resources we use appears tantalizingly close. But until either the majors address their compatibility issues (guys, nobody is going to walk away from your software if you integrate each other’s scheduling tools, really they are not), or somebody else comes up with a generalized mechanism for managing such things, we’ll have to stick with coping strategies.

Oh, sorry, did you book this meeting room? I’m just leaving…


My Research Agenda for the Fall 2016 and a Call for Participation

After a few weeks of failed vacation — my attempts at unbroken away time kept surrendering to business-related meetings and travel — I’m ‘back’ (as if I ever was ‘away’).

During the few hours of contemplation that I managed, I determined to announce a few research initiatives that I’ll be kicking off, on the heels of finishing up the Work Management Narrative Baseline, and Marketing Goes Agile.

  • Work File-Sync-and-Share — I am rebooting the former Enterprise File Sync-and-Share project which was started early in 2016, but which was sidelined for a number of unforeseen reasons. We will be reviewing the leading products in the market, and exploring the trends and changes in the marketplace.
  • Workplace Communications — from the prospectus:

Workforce communications is a class of message-centric work technology geared toward the modern mobile workforce, especially toward enabling communications between workers in retail, manufacturing, transport, security, and construction. These are mobile-first applications that also support other enterprise functions, but the emphasis is on the efficient functioning of the mobile worker, who often works outside the typical workplace and, in particular, often without access to a PC. They incorporate elements of messaging, chat, social media, and file sharing, as well as more workforce-specific capabilities like shift scheduling, calendaring, task management, and other functional tools.

I’ve been talking with a broad spectrum of vendors, and we will likely launch this in October.

  • Work Intranets — Intranets are private, content-centric networks used to provide communications and work-related services to internal knowledge-worker teams. We will likely launch this in December.

In all reports, we are taking a new approach. In each case, we are creating a research note that provides a concise take on the niche, and sets context for the trends and ‘bends’ (countervailing influences that block adoption) in the marketplace in question. I will take the lead role in these research notes, possibly supported by other researchers.

Call for Participation

And, about other researchers: We are taking a different approach to the Gigaom research network. We’ve had limited success with the model we more-or-less inherited from the old Gigaom, which was to rely on a network of self-motivated freelance researchers. This has had mixed results (at the most generous) and in some cases has led to extra effort due to the lack of alignment and integration in research processes. As a result, we are going to phase that approach out, and adopt a different approach.

We will — going forward — bring aboard only those researchers who want to transition into being full-time staff at Gigaom. While those researchers will start as freelancers, it is our intention to work only with senior analysts who we believe will make great contributions as full-time employees. Yes, they would start with a short period of freelance involvement, like three to six months, so we can be certain of the fit. And, to start, we will limit ourselves in the coming months to finding two or three senior analysts who could operate very quickly as Research Directors, managing a research domain for the business, developing new business, and hiring other analysts. So the network going forward will transition from a large number of relatively uninvolved freelancers to a smaller network of full-time researchers, where freelancing is involved only as an on-ramp to a full-time job.

I am actively seeking one or perhaps two prospective research directors to work with me in the work technologies and future of work domain, and we are open to more candidates in other domains we are interested in, like internet of things, big data, cloud computing, artificial intelligence, and others.

At present, we are looking exclusively for contributors based in the United States: people with considerable experience in their domain as analysts, consultants, or journalists (with an analytic bent). We are likely to expand to other regions in time.

If you are interested in discussing the research agenda, or in participating in the new Gigaom research network, please contact me by email.

More Stellar Visionaries To Join Gigaom Change 2016 Lineup

If you’ve been following us, you’ll know our mission with Gigaom Change is to add real value to the business leaders who are building our collective futures by delving further into the seven most disruptive enterprise technologies that are widely known but little understood: Robotics, Nanotechnology, AR/VR/MR, Human-Machine Interface, 3D+ Printing, AI and Cybersecurity.

To do this, we have been carefully curating the top three experts in each industry and have invited them to share their ideas and experience on a Keynote Panel. We’ve worked overtime to bring together a stellar lineup devoid of often-hyped CEOs of unicorn companies. Instead, we’re bringing you the visionaries that have their heads down making big R+D/proof of concept strategic investments that will make this rapidly advancing technological future real for all of us. Each of these seven technology panels will include:

  1. An industry leader who will explain why this technology is emerging right now and what the current state of the art really is.
  2. An industry practitioner who can speak to what it’s like to bring these technologies in-house, and address many of the questions the rest of us have about the impact they are having right now.
  3. An industry visionary who will take us into the next 5-20+ years and help us imagine what is possible moving ahead, as well as point out some of the big questions we all will need to prepare for, such as ethics and privacy, shifting intellectual property rights and regulations, the need for new business models and more.


After months of hunting for just the right people we’re excited to announce four more incredible speakers who will help us all better understand what this technology is, why it matters and where it’s headed:

Joining our AR/VR/MR panel, Liam Quinn is VP, Senior Fellow & CTO for Dell, responsible for leading the development of the company’s overall technology strategy. A key passion is xReality, where Quinn drives the development and integration of specific applications across AR and VR experiences, as well as remote maintenance, gaming and 3D applications.

Niloofar Razi is SVP & Worldwide Chief Strategy Officer for RSA. As part of the Cybersecurity panel, she brings more than 25 years of experience in the technology and national security sectors, leading corporate development and the implementation of investment strategies for billion-dollar industries.

Michael Petch is a renowned author and analyst whose expertise in 3D+ printing will bring deep insights into advanced, additive manufacturing technologies on our Nanotechnology panel. He is a frequent keynote speaker on the economic and social implications of frontier technologies.

Josh Sutton is Global Head, Data & Artificial Intelligence for Publicis.Sapient. As part of the AI panel, Josh will discuss how to leverage established and emerging artificial intelligence platforms to generate business insights, drive customer engagement, and accelerate business processes.

Join us at Gigaom Change 2016 Leader’s Summit on September 21-23 in Austin, Texas to explore some of the highly disruptive ways these seven key technologies are shifting enterprise business.

Tickets are strictly limited to 300 attendees, and we still have spaces left at a discounted rate for those who register early. We encourage you to take advantage and avoid a higher ticket price as we count down to this landmark event. Reserve your seat today.


Gigaom Talks with Rodolphe Gelin about Robotics

A graduate of the School of Civil Engineering (l’Ecole des Ponts et Chaussées) with a DEA in Artificial Intelligence, Rodolphe Gelin, EVP Chief Scientific Officer at SoftBank Robotics, has 20 years of experience in research with teams at the Commissariat à l’Energie Atomique (CEA), most notably in robotics used to assist people. Rodolphe joined SoftBank Robotics in 2008 as Director of Research and Head of Collaborative Projects. He is also the head of the ROMEO project to create a large robot designed to assist the elderly. Rodolphe Gelin now leads the Innovation team, which aims to develop new technologies for current robots and continue the exploration of humanoid robotics.

Rodolphe Gelin will be speaking at the Gigaom Change Leaders Summit in Austin, September 21-23rd. In anticipation of that, I caught up with him to ask a few questions.

Byron Reese: People have imagined robots for literally thousands of years. What do you think is the source of this fascination?

Rodolphe Gelin: The idea of robots has fed our curiosity for more than half a century with the potential of having another form factor to interact with on our own natural terms. In reality, robots offer us much more and, through recent technological innovations, robots are now helping us reach the next frontier in artificial intelligence research and engineering. As mankind has evolved, our inherent nature is to create, build and then continue to evolve what we’ve created; advancing in technology is no different. As high tech has become more prolific, we are constantly on a quest to evolve our technical knowledge. Robotics represents the next step or extension of that continued innovation.
It is obviously extremely difficult to make a robot with the range of motion and movement as a human. What are some of the things that make it tricky?

Humans have evolved very specific types of muscles which provide us with strength, force and speed that currently no available motor can match. The complexity of the human skeleton gives us extraordinary mobility and support, and this type of support is also very difficult to duplicate mechanically. In addition to creating a general form factor for humanoid robots, there are other details to consider, like fluidity and other natural movements that require intricate programming, as well as the various sensors and processors that help a robot identify its surroundings.
Do you foresee robots that are indistinguishable from humans? Would people want that?

At SoftBank Robotics, we strongly believe that robots should look like robots. Our robots, Pepper, NAO and Romeo, were created to resemble a human-like figure, but they do not look like us. There are indeed some robotics scientists today who have created robots that look like humans, with features like eyes and “skin,” similar to a wax figure, but that is not where we are headed with our development of robots. All robotics research is ongoing, and as each form factor becomes more advanced, some robots could look very much like their human counterparts. However, SoftBank Robotics is focused on creating approachable robots that make people feel comfortable and happy.

Do you think that the computing power of robots will eventually be such that they attain consciousness?

I don’t think that a machine that plays chess will attain consciousness, even with a lot of computing power. If consciousness is someday available in a computer, it will be because a human being programmed it, in one way or another. A robot is a machine that does what it has been programmed to do. If the programmer does not program a replicated state of ‘consciousness’, there would be no way for the program to get one. Random ‘conscious-like’ processing could appear in the form of a computer glitch, but the software designer should detect and correct it if he is not happy with the behavior. And if a developer wants to give a consciousness to his robot, he probably can. But what would be the purpose of it? To give a moral sense to the robot? Do we really want a machine judging the morality of what we are doing? I don’t think consciousness is a question of computing power; it is just a question of design.

Thank you for taking the time to share your thoughts on this subject. I look forward to further discussion in September.

Rodolphe Gelin will be speaking on the subject of robotics at Gigaom Change Leaders Summit in Austin, September 21-23rd.

End to End Monitoring Tames the SDN Beast

In many cases, a Software Defined Network (SDN) can create as many problems as it solves. After all, SDNs can be complex beasts that need constant care and attention. That said, the drive behind adopting SDN technology is skyrocketing. Gartner reports that by the end of 2016, more than 10,000 enterprises will have deployed SDN technology on their networks, a tenfold increase from end-of-year 2014.

Despite the rapid adoption of the technology, many enterprises have encountered failures when trying to make the transition. The reasons are numerous, with complexity cited as one of the major causes of failure. Yet there is another issue lurking beneath SDN deployment, one that derails the transition from pilot project to deployed technology: unpredictable performance. While SDN performance seems adequate during the pilot stage, an actual deployment reveals issues that directly impact the end user. In other words, many SDN deployments are derailed by the end user experience.

Detecting and understanding those performance issues is currently a major challenge for those working with SDN technologies. Throw WAN connectivity and application servers into the mix, and troubleshooting performance issues becomes nearly impossible.

Several vendors are attempting to address that problem with optimization appliances, which can improve performance and solve some of the performance problems. However, to truly deal with performance issues, one needs more than optimization; one also needs visibility, in the form of end-to-end monitoring. Many vendors have focused their WAN optimization solutions on the WAN-to-LAN-to-WAN ideology, making sure the networks are able to communicate effectively. However, it is often what happens at the endpoint that really defines success.

In other words, your WAN and SDN can be the fastest in the world, but if that performance doesn’t translate to the applications and the endpoints accessing them, then your SDN deployment may be classified as a failure.
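End-to-end monitoring, in its simplest form, means timing operations from where the user actually sits, rather than at a point inside the network. A hypothetical endpoint-side probe, sketched in Python (the function and sampling strategy are illustrative, not any vendor’s API):

```python
import statistics
import time

def measure(operation, samples=5):
    """Time an application-level operation from the endpoint's point of
    view, returning (median, worst) latency in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        operation()  # e.g. fetch a page or run a query against the app
        timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings), max(timings)
```

Wrapping a representative user action this way, and alerting when the median drifts, surfaces the endpoint experience that LAN-side counters alone can miss.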

This is something Nik Koutsoukos, VP of Product Marketing at Riverbed, has become keenly aware of. In an interview with Gigaom, Koutsoukos said, “performance is often judged by how the end user is impacted, something that is really evident when the endpoints are running virtual desktops that are hosted on the corporate network.”

Koutsoukos added, “To address performance issues, it is critical to be able to understand what is happening on the endpoint and the related applications, the only way to do that is to have end to end visibility.”

With that in mind, Riverbed has acquired Aternity, a company that specializes in end-user experience monitoring. Koutsoukos said, “Aternity brings with it the ability to analyze end to end application performance and how availability, latency, execution and quality are perceived by the end user.” He added, “By incorporating Aternity’s features into Riverbed SteelCentral, administrators will have complete visibility in an enterprise’s application delivery ecosystem, a capability that will be sure to ease troubleshooting and enhance performance.”

For those looking to optimize WAN connectivity, deliver networking via SDNs, and improve the end user experience, Riverbed seems to have hit the nail on the head. One can only hope that the integration of such diverse capabilities will ultimately move SDN further ahead, while resolving the annoying performance issues that hamper end user productivity.




Cloud Databases for Enterprise Workloads

Due to its complexity, cloud strategy will be the most challenging business driver in the years to come.

Take everything you need to consider in the purchase of every single piece of software. Now add a dimension that should be coordinated with every other piece of software installed or being considered. That dimension is the cloud. These days, not being able to include cloud-based software in your systems is tantamount to cutting off your arm.  Some software is strictly available in the cloud.

Certainly, with the data sizes involved in databases today, the cloud makes the utmost sense for storing large, fast-growing amounts of non-sensitive data whose ultimate size is extremely difficult to estimate.

However, the cloud is not the cloud is not the cloud. The cloud is very multidimensional. As such, it is among the most abused terms in our industry.

The cloud can be public or private.  It can be software, infrastructure, or platform as a service.  It can be single- or multi-tenant (with tenants being other departments or other companies).  Also, a wide variety of application areas can be deployed on the cloud.

At the least, cloud means virtualization: the abstraction of developers from their hardware internally. I have seen many companies begin a quasi-private-cloud journey with the system administration group taking a more SLA-based approach to its duties: “Give us your requirements and we’ll provide the machine(s).”

Server management is not considered a core competency of application development; it requires unique skill sets. So it is increasingly being delivered under service levels to the applications that need servers to function, which is most of them, information management applications included.

This presents a reality of co-existence between on-premises, private cloud and public cloud solutions, which will be the norm for the remainder of our careers.

GigaOm is conducting original research into this rapidly changing space. We are taking a close look at relational databases in the cloud: databases built for the cloud, databases that have been reengineered to work in the cloud, and databases that can reasonably be deployed in the cloud. We will publish our findings in an upcoming report.
The main cloud benefits for databases are the following:

  • Flexibility to scale computing resources
  • Ability to shorten implementation windows
  • Reduced cost
  • Testing, development, and sandboxing environments
  • Geographic scalability

Database deployments in the cloud are at an all-time high and are poised to expand dramatically. We are looking at vendors who offer these value propositions to enterprises.

In terms of architecture, an organization considering a cloud database must determine its fit in the overall architecture of their current infrastructure and information ecosystem. When architecting for current and future data needs, there are key considerations across the board including data sources, change data capture, windows for extraction, data integration, security (as well as privacy and encryption), data validation, quality assurance, and so on.

The report will also present a fast-track methodology for landing and expanding databases in the cloud for an enterprise.

The cloud has been a disruptive technology that many companies have leveraged to maintain or gain momentum as a company. The cloud offers opportunities to differentiate and innovate with database systems at a much more rapid pace than ever before possible. Cloud storage costs less, enables more rapid server deployment and application development, and offers scalability.

If your value proposition to the market includes database services in the cloud, you are invited to get in touch and share that proposition. Contact Gigaom regarding participation in our research.

Will the Olympics Break Your Internet?

The worldwide games are sure to have an impact on the world wide web, though judging that impact proves to be more of an art than a science. After all, predictions can be based upon what happened at past games: the 2008 Olympics generated some 628 million video streams and 8.2 billion page views.

Nonetheless, NBC is claiming that the 2016 Olympics will be its “most live Olympics ever”, with over 4,500 hours of coverage streaming to billions of potential viewers. What NBC fails to say is how all that traffic will impact corporate networks. While there is little worry that the Olympics will break the internet, the amount of content available may well have an impact on business networks, as workers strive to see the latest scores and live-stream events.

The potential productivity drain created by the Olympics has many businesses a little worried; after all, those businesses want their employees attending to their chores, not (virtually) attending Olympic events. However, the worries should go beyond lost productivity: network disruption may rear its ugly head and bring performance to a crawl, impacting everyone, not just those sneaking a peek at a gold medal ceremony.

A recent survey by Riverbed, a WAN optimization vendor, predicts that the bandwidth consumed by the 2016 Olympics will far outpace what was experienced at the 2008 Beijing Olympics, casting further doubt on whether business networks will be able to smoothly handle the strain.

However, there are some simple steps network administrators can take to mitigate bandwidth-sapping issues before they bring traffic to a crawl. Respondents to Riverbed’s survey prove to be a fount of actionable information, offering the following statistics:

  • 69% of those surveyed reported one or more network issues caused by employees accessing online games content
  • 85% are expecting to more closely monitor the performance of their networks and applications
  • Only 43% are confident that they will be able to safeguard critical applications during high network demand
  • 12% lack confidence that their networks will be able to handle the added strain and traffic
  • 70% will limit employees from accessing games content through the network

With the Olympic stage set, network administrators are going to have to deal with the traffic demand proactively and get their hands a little dirty with the plumbing of their networks. To head off a potential network traffic disaster, Nik Koutsoukos, VP of Product Marketing at Riverbed, offers some free advice.

In a phone interview with GigaOM, Koutsoukos suggested that network managers consider the following:

  • Implement real-time end-to-end monitoring for visibility into the network and applications
  • Prioritize and reserve bandwidth for critical applications
  • Distinguish between company assets and BYOD
  • Bring siloed IT teams together to plan for any high priority network events

Koutsoukos’ advice rings true: to manage the bandwidth monster, administrators need full visibility into exactly what is happening on the network and how critical applications are being affected. With that knowledge, administrators can craft bandwidth policies that strike a happy medium between critical applications and ancillary network usage.
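As a minimal sketch of what "prioritize and reserve bandwidth for critical applications" can look like in practice, here is a Linux traffic-control (`tc`) configuration using an HTB class hierarchy. The interface name (`eth0`), the 100 Mbit uplink, and the critical-server address (`198.51.100.10`) are all placeholder assumptions — adjust them for your own network.

```shell
# Traffic-shaping sketch using Linux tc with HTB (hierarchical token bucket).
# Assumes interface eth0 and a 100mbit uplink -- placeholders, not prescriptions.

# Root HTB qdisc; unclassified traffic falls into class 1:30 by default.
tc qdisc add dev eth0 root handle 1: htb default 30

# Parent class capped at the full uplink rate.
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit

# Class 1:10 -- critical business applications: guaranteed 60mbit,
# allowed to borrow up to the full uplink when other classes are idle.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 60mbit ceil 100mbit

# Class 1:30 -- everything else (e.g. streaming video): guaranteed
# 20mbit, hard-capped at 40mbit even when the link is otherwise idle.
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 20mbit ceil 40mbit

# Example filter: steer traffic bound for a known critical server
# (198.51.100.10 is a placeholder address) into the priority class.
tc filter add dev eth0 parent 1: protocol ip prio 1 \
   u32 match ip dst 198.51.100.10/32 flowid 1:10
```

The key design point is the rate/ceil split: streaming traffic always gets some floor so it doesn't stall outright, but it can never starve the classes that carry business-critical applications.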


Why Fear AI?

At Gigaom Change Leader’s Summit 2016 in September, we will be exploring seven key business technologies that are real today. One of these is AI. In anticipation of that event, we are doing a series of short pieces on the featured technologies.

In 1950, Alan Turing asked the question of whether a machine can think. In 1955, John McCarthy coined the term “Artificial Intelligence.” In 1956, McCarthy and three fellow scientists convened a summer workshop at Dartmouth to work out the essentials of the new field.

The optimism ran high that a thinking machine could be developed relatively quickly, even with the technology of that time. This optimism proved to be unfounded.

AI has endured a number of so-called “winters,” when funding dried up because of a disconnect between the expectations of the funders and the realities of the science. But all the while, computing power and techniques advanced to the point where, for the first time, expectations are not just being met but exceeded.

Now, some wonder if we have inadvertently begun to act out the drama in Mary Shelley’s Frankenstein, creating something that either through intention or accident wreaks havoc on our world. Some call it an existential threat.

But should we worry? Let’s dive in.

First, those suggesting caution:

Physicist Stephen Hawking believes development of a true AI would be no less than “the end of the human race.” According to Hawking, the fundamental problem is that AI would iterate and advance rapidly, whereas we “[h]umans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Tesla CEO Elon Musk is equally bearish on AI, calling it the “biggest existential threat” facing mankind, “potentially more dangerous than nukes.”

Microsoft co-founder Bill Gates is more restrained, saying he’s “in the camp that is concerned about super intelligence,” because decades of accelerating progress may result in an intelligence so advanced that it’s hard to control.

Now, let’s hear from the other side:

Linguist and cognitive scientist Noam Chomsky, far from panicking about malevolent AI, thinks the entire pursuit of a statistically driven AI is barking up the wrong tree. He says it’s a shallow approach that may have practical applications but is unlikely to yield real insight into the nature of intelligence or cognition.

AI author Ray Kurzweil isn’t afraid of artificial intelligence, either. He analogizes to other scientific endeavors with doomsday potential, like biotechnology, observing that safety rules and ethical guidelines have kept those technologies relatively safe.

And finally, deep learning pioneer Andrew Ng actually mocks the idea of an ‘evil killer AI,’ downplaying concerns as a “phantom” and an unnecessary policy distraction. To Ng, “[w]orrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven’t even landed on the planet yet!”

So what is the net of all of this? Well, when the estimates from smart people as to when we will have AGI vary from five to 500 years, there is obviously neither clarity nor consensus about the future. One thing seems certain: There is no turning back. Whatever is possible will be done.

Join us at Gigaom Change Leader’s Summit 2016 for a more in-depth look into Artificial Intelligence.