Category Archives: Know How

Busybot: A ‘Parasitic’ Task Management Tool For Slack

Because Busybot and Slack look so much alike and are so tightly connected, I avoid the cognitive costs of switching.

I’ve tried using work management tools like Asana in connection with Slack, and the results have been mixed, principally because — I think — there is a mismatch in the basic orientation of the tools: Slack is messaging-centered, while Asana is task-centered.

With a tool like Asana, the Slack connection sends notifications to a Slack channel whenever changes occur in the Asana workspace: for example, whenever a task is created, completed, or commented on. A slash command (‘/asana’) lists tasks, and arguments to the command create tasks, assign them, and add comments.

Asana integration in Slack

But I confess that I have found this style of integration difficult. The two models of use — chat-based conversation in Slack and task-based coordination in Asana — don’t align for me, and the mapping from an Asana workspace to a Slack channel doesn’t always line up cleanly. Nor do I want every tweak to a task dumped into the channel in Slack: that’s an endless stream of noise, and Slack is noisy enough.

I recently encountered a tool that takes a different tack. Busybot avoids the mismatch problem by operating in a parasitic way: it relies on Slack’s architecture to the greatest extent possible. For example, there is no independent login: you use Slack’s. And once logged in, the channels of the team you sign into are duplicated as contexts for tasks in Busybot.

Here’s the login:

login for Busybot

Here’s the #general channel in Slack. You can see that I /invited busybot to the channel (I had already created the integration).

Inviting and Creating a Task

I typed a message to busybot, ‘ask Esko for a contribution’. If I had added ‘@stoweboyd’, that would have assigned the task to me as well.

Slack team

Over in Busybot, everything looks extremely similar:

Task in Busybot

On the left, the design of Slack is emulated, so that for each Slack channel there is an equivalent Busybot channel, where all tasks can be found. I’ve selected the ‘ask Esko’ task, which opens the task pane, and then chosen the ‘add checklist’ feature.

Task Checklist

I’ve added a single checklist item, but you can have as many as needed. Descriptions, comments, a deadline, and task assignment are also available as metadata.

The task list can be sorted, which is moot in this case, since there is only one task:

Also note that the ‘@stoweboyd’ option at the top opens all the tasks assigned to me, and ‘all tasks’ opens all tasks in the team, sorted by channel.

Tasks can be added, edited, and deleted in Busybot, but can only be created and displayed on the Slack side of the integration, at present. I’ve been told by Busybot’s CEO and founder, Damian Bramanis, that various new features are coming, like multi-team functionality, new ways to group tasks in views, and tags.

Conclusions and Takeaway

Busybot works for me despite the minimal degree of metadata, and I think the reason is the equivalence between the Slack and Busybot information models: I don’t have to switch gears mentally when I move from Slack to Busybot, or vice versa. It feels like I am in the same place, just looking at different attributes of the same system of information. Moving from Slack to Busybot feels like I am just zooming in on task details that are suppressed on the Slack side. Because the two ‘sides’ look so much alike and are so tightly connected, I avoid the cognitive switching costs of moving from Slack to non-parasitic tools, like Asana.

Yes, I’d like to be able to do more with Busybot, though. For example, I’d like to be able to change task attributes on the Slack side, like adding a comment to a task, so that the text of the task comment would appear both in the Slack chat history and in the task comment thread. Damian tells me they are working on ways of accomplishing more sophisticated sorts of integration like that, perhaps with a /busybot command, or clever use of the channel topic (setting the topic to the name of a task, for example, so that commands could refer to that task).

At any rate, I will be watching the developments at Busybot with close attention.

Crossposted 1 May 2016 on

Update 1 May 2016 4:30pm: Several folks mentioned Swipes for Slack, as another approach to accomplish some or all of what Busybot does. I will review in another post.

A new era at Google? Hardware moving to the foreground?

Yes, I am breaking one of my own unwritten rules: putting two question marks in a post title. But this story warrants it, particularly since what I am writing about won’t get much play.

Rick Osterloh, the former head of Motorola, left Lenovo, the brand’s current owner, a month ago. Google, you may recall, had acquired Moto, kept a pile of patents and an advanced technology group, and spun the rest off to Lenovo. Apparently, Sundar Pichai, Google’s CEO, thinks that Osterloh is the one to make sense of the many, many hardware efforts that Google has found itself running.

So Osterloh will be overseeing Google’s Nexus, Chromecast, laptops and tablets (Chromebooks and the Pixel C tablet), OnHub (the home router that is the camel’s nose under the tent flap of the living room), ATAP (the advanced technology and projects group, with efforts like Project Ara), and (drumroll) Glass.

Yes, Glass. Remember Glass? I have said that putting Glass under Tony Fadell (CEO of Nest) would lead to its re-release as a formidable player in what is likely to be the next platform: augmented reality. But Fadell has had a lot of trouble since Google acquired Nest, and Glass has remained in the shadows.

Google is still best-positioned to bring AR to prominence with something derived from Glass. Maybe Osterloh is the one who’ll make it happen. But sooner or later, the next era of computing will arrive, and after that day all of us will be wearing Google Goggles — or something very like — and nothing will ever be the same.


Bob Metcalfe to Keynote at Gigaom Change in Austin

One of the nice things about the Internet Age being relatively new is that many of its earliest pioneers are not only still around, but still doing interesting new work. Among these titans, few loom as large as Bob Metcalfe: inventor of Ethernet, coiner of Metcalfe’s Law, founder of 3Com.

Bob was there in the early days at PARC, and today you can find him at the University of Texas promoting entrepreneurship and startups, and keeping his eyes open for the next big thing.

When considering keynote speakers for Gigaom Change, an event about the present disruption of business through new technology such as AI and robots, I wanted to find someone who had seen a new technology arrive at the very beginning and then ushered it through to commercial success, and finally helped to make it impact the entire world.

I had a short list of candidates and Bob was at the top. Luckily, he said yes.

I caught up with him Monday, April 25, and all but ambushed him with a series of questions about the kinds of changes he expects technology to bring about next.

Byron Reese: So I’ll ask you the question that Alan Turing posed way back: “Can a machine think?”

Bob Metcalfe: Yes, I mean, if human beings can think then machines can think.

And so, you believe we’ll develop an AI.

Yes, absolutely. The brain consists of these little machines, and eventually we’ll be able to build little machines and then they’ll be able to think.

Do you have an opinion on what consciousness is?

It has something to do with attention. That is, focusing the activities of the thinking machine; focusing them in on a certain set of inputs, and that’s sort of what consciousness is.

Do you think we’ll make conscious machines?

Yes. An interesting case of consciousness is when the selected inputs, that is the ones selected for attention are internal, that is self-consciousness—being able to look down on our own thoughts, which also seems to be possible with some version of a neural net.

Would a conscious machine have inalienable rights?

Whoa! Do human beings have inalienable rights? I’m not sure.

We claim we have a right to life and it’s generally regarded there are things called universal human rights.

That’s a conflict of interest because we’re declaring that we have our own rights. Actually, it worries me a little how in modern day life, the list of things that are ‘rights’ are getting longer and longer.

Why does that worry you?

It just seems to be more a conflict of interest. Sort of a failure to recognize that we live in a reality that requires effort and responsibility, and ‘rights’ somehow is a short-cut, as in we have a ‘right’ to stuff as opposed to having to work for it.

Do you believe that robots and AI will be able to do all the jobs that humans can do?

I think so, I think that’s inevitably the case. The big issue as you well know is whether it’s man-versus-the-machine or man-and-the-machine, and I tend to come down on the ‘man-and-the-machine’ side of things that is, humans will be enhanced by their robots not replaced by their robots.

So, some kind of human-machine synthesis like augmented memory and all of those sorts of things.

Well, we have that already. I have the entire Google world at my disposal, and it’s now part of my habit when something comes up that can’t be remembered, I quickly take out my iPhone and I know what it is within a minute. You know, like, ‘Who was Attila the Hun?’ That came up the other day, and I could read the entire life of Attila the Hun within a minute. Although the interface between Google and my thought process is awkward between typing and reading. I can imagine eventually that we’ll have Google inserted in our head more efficiently. And then it won’t take 10 years to learn French, it’ll take just a few minutes to learn French because you’ll just ‘plug it in’.

What do you think people will do in the future if machines and AIs are doing all the things that have to be done?

I don’t know. I guess, you know, a hundred years ago everybody knew how to milk cows—well, 40 percent of the population knew how to milk a cow. And now, you know, the percentage of people who know how to milk a cow is pretty small and there are robots doing it. And somehow all of those people managed to get employed in something else, and now they’re UX/UI engineers, or they’re bloggers or they’re data scientists. Somehow all those people stopped milking cows and they started doing something at a higher-level in Maslow’s hierarchy.

There are two potential problems with that, though. One is if the disruption comes too quickly to be absorbed without social instability. The second is that in the past we always found things to do because there were things we could do better than machines. But what if there’s nothing we can do better than a machine? Or are there things only people can do?

You’ve wandered out of my area of expertise. Although, on the ‘happened too quickly’ front, as we’re seeing in Austin this week, the status quo can slow things down, like the Uber-Lyft slow-down initiative here in Austin. We like taxis here rather than Uber and Lyft, apparently because they’re safer.

What are you working on? Enough about the big issues, how do you spend your days?

I spend my days on the big issues, and the big issue is innovation as a driver of freedom and prosperity; and the tool of innovation that I’ve settled on perfecting and promoting and professing is startups. Startups as vehicles—as innovation vehicles—and mostly coming out of research universities. So most of what I do is focused on that view of the world.

Why did you choose startups as the mechanism of innovation?

Because startups, in my experience, have been the most effective way to innovate. Everyone loves innovation as long as they’re not being innovated upon, and then as soon as they’re innovated upon they become the status quo, which is resourceful and nasty and mean. And, so the most effective tools in my experience against the status quo have been these startups, which at their core are champions of innovation. I got the word champion from Bob Langer at MIT; he believes these new technologies need champions, which is why he likes startups. A startup is a place where champions go and gather resources, coordinate their efforts and scale up. So, I guess it’s their effectiveness in having real impact with innovations that causes me to admire and profess startups.

It’s interesting though that as much as what you call the status quo can slow down innovation, nothing can really ever be stopped can it? I mean, big whale oil didn’t stop kerosene and big kerosene didn’t stop electricity.

The rate of advance can be slowed. The internet is old now, it started running in ’69. Just think how many years have passed, 50 years, to get where we are today. Is that fast or slow, by the way?

I would say that’s very fast. We’ve had recorded history, and by that I mean writing, for 5000 years. We have only therefore had the Internet for 1% of recorded history. Are you overall optimistic about the future that all these new technologies and startups are going to usher in? Do you think it’s going to be a better future, or not?

I’m a better-future believer, an optimist, and enthusiast. I think cynics are often right but they never get anything done. Just as a matter of choice, without assessment, I choose to be optimistic.

Last question: Aren’t startups fundamentally irrational in the sense that the likelihoods of success are so small and the risk so high that one has to be somewhat self-deluded to undertake one? I ask this, of course, as someone who has done several.

Maybe that circles us back to your big question before, maybe that’s what makes us humans, is that we need to delude ourselves to thereby make progress. Maybe robots won’t do startups because they’re too rational.

Everything You Know About the Stack is About to Change

I am at the OpenStack Summit here in Austin and the announcements and releases keep rolling out, illustrating that the growing OpenStack market has some real teeth, taking a bite out of the market standbys. Even so, there is still a great deal of fear, uncertainty and doubt (FUD) around the viability of clouds built upon OpenStack. The real question here is whether that FUD is unfounded for today’s emerging markets.

That means taking a closer look at OpenStack is a must for businesses delving further into public, private and hybrid clouds.

The OpenStack Project, which is now managed by the OpenStack Foundation, came into being back in 2010 as a joint venture between NASA and Rackspace Hosting, with the goal of bringing collaborative, open-source software to the then-emerging cloud market. Today, the OpenStack Foundation boasts that some 500 companies have joined the project, and the community now collaborates around a six-month, time-based release cycle.

OpenStack, an open-source software platform for cloud computing, has become a viable alternative to the likes of Amazon (S3, EC2), Microsoft Azure and DigitalOcean. Recent research by the 451 Group has predicted a 40% CAGR, with the OpenStack market reaching some $3.5 billion by 2018, enough of a market to make all the players involved take notice.

However, the big news out of the OpenStack Summit Austin 2016, comes in the form of product announcements, with more and more vendors aligning themselves with the platform.

For example, HPE has announced its HPE Helion OpenStack 3.0 platform release, which is designed to improve efficiency and ease private cloud development, all without vendor lock-in problems.

Cisco is also embracing the OpenStack movement with its Cisco MetaPod, an on-premise, preconfigured solution based on OpenStack.

Another solution out of the summit is the Avi Vantage Platform from AVI Networks, which promises to bring software-defined application services to OpenStack clouds, along with load balancing, analytics, and autoscaling. In other words, Avi is aiming to bring agility to OpenStack clouds.

Perhaps the most impressive news out of the summit comes from Dell and Red Hat, with the Dell Red Hat OpenStack Cloud Solution Version 5.0, which incorporates an integrated, modular, co-engineered, validated core architecture and leverages optional validated extensions to create a robust OpenStack cloud that integrates with the rest of the OpenStack community’s offerings.

Other vendors making major announcements at the event include F5 Networks, Datera, DreamHost, FalconStor, Mirantis, Nexenta Systems, Midokura, SwiftStack, Pure Storage, and many others. All of those announcements have one core element in common: the OpenStack community. In other words, OpenStack is here to stay, and competitors must now take the threat of the open-source cloud movement a little more seriously.




Mobile Strategy and ROI Calculation

When it comes to developing a successful mobile strategy, and building a long-lasting relationship with customers, a CMO is often faced with difficult considerations around the best way to measure success.

Creating an app and investing significant amounts of money acquiring users is no longer enough to remain on the ‘first screen’ of any given mobile device — which is where any organization ultimately needs to be. It’s become necessary for teams to focus their efforts on techniques and campaigns that will not only secure installs, but also maintain loyal relationships with mobile users.

There has been ample research on determining the ROI of the vast amounts that businesses invest in acquisition. Now, though, it’s becoming increasingly apparent that the same attention should be given to money spent post-install as well.

Push notification campaigns, in-app messaging, and user-experience A/B testing are all engagement techniques you’ll need to invest in to help deliver successful mobile relationships. Now all you need to do is demonstrate that money is better spent here than elsewhere…

So, if you’re the CMO in this situation, how do you prove this effectiveness and need?

Well, after adopting some form of mobile marketing platform to handle this task, you would hope that your mobile analytics will improve. You may see gains in engagement, retention, and ultimately your revenue numbers. Perhaps obviously, this is the first and easiest way to consider ROI.

Once you get a grasp on it, and you begin to see these numbers change, calculating ROI is relatively easy. Think of it this way — if we grow a metric like average revenue per user (ARPU) from $5 to $10 using a marketing automation program, and we have 1 million monthly active users, then we can put $5 million per month into the credit column. If the total monthly spend on the program amounts to $100,000, that works out to a very (very!) satisfactory 4,900% ROI.
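Working those illustrative numbers explicitly (the figures come from the paragraph above):

```python
def roi_percent(monthly_gain, monthly_cost):
    """ROI as a percentage: (gain - cost) / cost * 100."""
    return (monthly_gain - monthly_cost) / monthly_cost * 100

arpu_lift = 10 - 5          # ARPU grows from $5 to $10 per user per month
users = 1_000_000           # monthly active users
gain = arpu_lift * users    # $5,000,000 per month into the credit column
cost = 100_000              # monthly spend on the program

print(roi_percent(gain, cost))  # 4900.0
```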

Granted it won’t always be ARPU that we’re measuring, but in the vast majority of cases, there will be metrics with which we will measure mobile success, and once we add a quantifiable value to these, we’ll be able to establish decent ROI estimates.

The Campaign Level

Another, perhaps more reliable, way to measure ROI is to focus specifically on individual campaigns. Doing this will allow you to measure the effect of any changes within specific campaigns and sum them to provide a total benefit.

Assuming that you’re using a good marketing automation platform, you should get clear results from each individual campaign, against whichever metrics you choose, compared to a control group in order to isolate other variables. By combining these multiple campaigns, we get a cumulative benefit that can be used to calculate ROI on the overall spend. Of course, this approach won’t capture some benefits, such as the effect an overall improved experience can have on word-of-mouth, but it’s probably better to be conservative when calculating ROI anyway.
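As a minimal sketch of that campaign-level accounting, here is one way to sum treatment-over-control uplift across campaigns; the campaign names, dollar figures, and spend are invented for illustration:

```python
# Hypothetical per-campaign results: monthly revenue for the treated users
# versus a held-out control group, scaled to the same group size.
campaigns = {
    "push_reactivation": {"treatment": 120_000, "control": 90_000},
    "inapp_upsell":      {"treatment": 260_000, "control": 200_000},
    "ab_onboarding":     {"treatment": 180_000, "control": 175_000},
}

# The uplift attributable to each campaign is treatment minus control;
# summing gives the cumulative monthly benefit of the whole program.
total_benefit = sum(c["treatment"] - c["control"] for c in campaigns.values())
print(total_benefit)  # 95000

monthly_spend = 40_000
roi = (total_benefit - monthly_spend) / monthly_spend * 100
print(f"{roi:.0f}%")  # 138%
```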

One thing that is vital to remember: don’t go looking for evidence of ‘good results’ after you’ve run the campaign. Human nature being what it is, you’ll probably find some. The key is to identify the metrics you want to impact, and the effect you hope to have, before you implement the campaign.

The Symbiotic Relationship of the OSI Model and Application Performance

It’s no secret that Application Performance Monitoring (APM) is becoming a critical competency in today’s enterprise networks. After all, so many enterprises are moving to a cloud-based model that leverages tiers of service, which brings unforeseen complexity into the task of keeping things running smoothly.

Traditionally, most IT managers have thought of application performance as a direct result of the static resources available, focusing on elements such as processor performance, available RAM, and perhaps the traffic saturation of the local area network (LAN). Although monitoring those elements remains critical for providing adequate performance, the APM game has changed, or more appropriately evolved, into something that must address much more than the status of the basic hardware that makes up a network.

That change (or evolution) has been driven by the adoption of technologies such as cloud services, hybrid cloud deployments, mobility, content delivery networks (CDNs), hosted databases and so on. Those new technologies have changed the basic dynamic of how an application is delivered to an end user and how the endpoint interacts with the application’s data.

A good example is a line-of-business application delivered by a cloud service: a hosted application server delivers the application to an endpoint via a browser connection, the associated data is stored in a hosted database, and connectivity to the application and data is provided over the internet via a VPN. In that situation, multiple elements have to work in concert to provide acceptable application availability and performance, and any one of those “tiers” can affect the application. What’s more, any single tier can impact any other, especially when virtualization is involved or a software-defined solution (SDN, SDDC, SDS, etc.) underpins operations.

Take the above example and apply it to the real world, where an IT manager gets a trouble ticket forwarded from the help desk that simply states “user is complaining of a slow application”. For that IT manager, the big question becomes where to start. Using the traditional approach, the starting point would be the hardware and the network. However, that approach is all but useless in today’s world.

Today, the IT manager must turn to an APM platform to track down a problem, and getting the proper intelligence out of that platform is a critical component of successfully remediating any application performance problem. That said, the typical APM platform is little more than a measurement and reporting tool: it will assist an IT manager in solving the problem, but that manager must understand how the tiers of a hybrid, cloud-served network deliver an application. That understanding brings us to how the OSI model can serve as a tier template for application delivery.

If you think about the seven layers of the OSI model and what each is responsible for in the realm of network communications, it becomes clear how some of those layers can be mapped to the tiers of application delivery.

The OSI model is broken out into seven layers, each with specific functions. Each of those functions maps directly to the movement of information across a network.
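As a rough illustration of the idea, the layers might be mapped to delivery tiers like this; the tier descriptions below are this sketch’s own assumptions, not a standard taxonomy:

```python
# Illustrative mapping of OSI layers to the application-delivery tiers an
# APM tool might probe. Descriptions are examples, not an official mapping.
OSI_TO_APM_TIER = {
    7: ("Application",  "application server / SaaS front end"),
    6: ("Presentation", "encryption and compression, e.g. TLS offload"),
    5: ("Session",      "session brokers, VPN tunnels"),
    4: ("Transport",    "TCP behavior: retransmits, window sizes"),
    3: ("Network",      "routing between site, CDN, and cloud"),
    2: ("Data Link",    "switching on the local segment"),
    1: ("Physical",     "cabling, radio, carrier circuits"),
}

# Print the stack top-down, the way a troubleshooter would walk it.
for layer in sorted(OSI_TO_APM_TIER, reverse=True):
    name, tier = OSI_TO_APM_TIER[layer]
    print(f"L{layer} {name:<12} -> {tier}")
```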


If you align that basic concept with APM, it becomes clear how a symbiotic relationship forms between application delivery and the constructs of the OSI model. Below is a Citrix-based example.


When comparing the two models, it becomes clear that the OSI model is intertwined with the best practices of APM troubleshooting.

The question here becomes one of how well IT managers understand the implications of APM, and how understanding the OSI model becomes a critical competency for success.

For more information on the best practices for APM, please take a look at a webinar I participated in for eG Innovations, which can be found at

The driverless economy: what our city streets might be like in 2020

Paradoxically, here in early 2016, we are witnessing the lowest U.S. gas prices in years, but we are also moving toward a transportation era based on dramatically different economic premises, most obviously driverless vehicles. So it seems a perfect time to dig into the deep economics of cars, their impacts on city life, and what we can anticipate coming down the pike with the rise of driverless vehicles and smarter ways of living in cities once we can depend on AI-augmented transport.

Perhaps there is nothing so pedestrian as parking, a global phenomenon that we generally take for granted along with many of the other externalized costs associated with car culture. The hard fact is that the typical car spends 95 percent of its working life parked. This means that little of the value of the car is actually realized. And, according to the AAA, the average cost of owning and maintaining a mid-sized car in the U.S. in 2014 was almost $9,000, of which $1,300 per year goes just to parking! Therefore we should not be surprised that parking is a $100B industry. This despite the fact that as much as 98 percent of car trips — at least in Los Angeles — start or end with free parking, according to the California Household Travel survey.

Parking also consumes a great deal of time. According to Donald Shoup, 16 studies from 1927 to 2001 found that, on average, 30 percent of the cars in congested downtown traffic were cruising to find parking. He also notes that more recently, in 2006 and 2007, 28 percent of drivers stopped at traffic signals in Manhattan, and 45 percent in Brooklyn, were searching for curbside parking. The average American takes four car trips a day, and if you figure two are commutes, that can still translate into a half hour or more of looking for a space.

We seldom think of how much of our cities are given over to cars, but one study found that 14 percent of Los Angeles is devoted to parking. Barclays Capital reported that we could see a 60 percent decline in the number of cars on the road, but the impact on parking could be much greater.

Obviously, the emergence of driverless vehicles suggests a great deal about the future of cities, and the impact on parking may be considerable. First of all, it’s clear that the on-demand car services like Uber and Lyft (along with car manufacturers like GM and Ford) have plans to provide driverless transportation to replace ‘drivered’ cars. That means that cars will not be parked after you get to the office, movie theater, or even grocery store.

One study suggested that a single driverless car could replace up to 12 drivered cars. Instead of being parked at the destination, the driverless car will simply move on to the next pickup. Another consideration is that driverless cars may be folded into municipal transport plans, like trains, buses, ferries, and bicycles, and not managed like taxis or on-demand car services at all.

Even for those cars that are privately owned — which is likely to be a much smaller number considering how cheap Uberish services might be once the driver is out of the picture — driverless cars may be much more efficiently parked than human-managed ones, requiring dramatically less parking.

Source: Curbed Los Angeles

The frontiers of the future will be the ruins of the unsustainable.

One thing bound to change is the way municipalities require large parking allocations for new housing, based on historic patterns. That requirement is likely to fall away quickly, immediately leading to denser and lower-cost housing, even before our cityscapes are dramatically remade.

There are other very significant economic impacts that will arise from driverless cars. It’s been estimated that accidents will fall 80 percent in the next 20 years as driverless cars sideline human drivers, who are demonstrably terrible at driving. As a direct consequence, car insurance will plummet, dropping from $125 billion in covered losses in the U.S. today to as little as $50 billion in 2040. But this is hard to predict, since we have no prior data. It could be much, much lower.

Source: The University of Texas at Austin

In the simulation above, we can get a sense of the driverless future. The white rectangles represent cars that have been ‘scheduled’ to pass through an intersection, while the yellow ones have yet to be scheduled. Once scheduled, the cars are coordinated in their passage, so traffic signals are not necessary and the flow of cars is much, much faster, with no need to stop.
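A toy sketch of the reservation idea, with entirely arbitrary timings: each arriving car requests the earliest slot at which the intersection is free, so once scheduled, no car ever has to stop and wait at a light.

```python
# Reservation-based intersection control, reduced to a single shared "tile".
# Real simulations schedule many tiles in space and time; this keeps only
# the core idea that scheduled cars flow through without stopping.
def schedule(arrivals, crossing_time=2):
    """arrivals: list of (car_id, arrival_time); returns car_id -> slot start."""
    slots = {}
    next_free = 0
    for car, t in sorted(arrivals, key=lambda a: a[1]):
        start = max(t, next_free)      # cross on arrival if the tile is free
        slots[car] = start
        next_free = start + crossing_time
    return slots

print(schedule([("A", 0), ("B", 1), ("C", 1)]))
# {'A': 0, 'B': 2, 'C': 4}
```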

The economics of frictionless traffic — without traffic lights, traffic jams, and built-in delays — is another large factor in the net savings from driverless transport. Living and working in the city of 2025 will feel totally different, and not just because there is no driver turning the wheel. It will be a totally foreign landscape, with little parking, no congestion, and much more space at street level dedicated to people, and with significantly fewer cars in view at any time. Driverless won’t mean carless, but cars will no longer dominate our cities as they do today. And I can’t wait.

Blockchains: The Next Generation of the Secure Web

Perhaps Bitcoin’s greatest gift to the web is not the disruptive nature of a digital currency, but the platform used to build that distributed, worldwide, decentralized crypto-currency. Bitcoin’s platform, often referred to as a blockchain, uses an innovative approach to keep transactions secure, validate ownership and guarantee provenance.

A blockchain consists of a distributed cryptographic ledger shared among all nodes participating in the network, where every successfully performed transaction is recorded and shared. In other words, blockchains are proving to be a fully auditable, tamper-resistant database that has so far withstood known hacks and attacks.
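The hash-chaining that makes such a ledger tamper-evident can be sketched in a few lines; this shows only the linking idea, with no networking, consensus, or mining:

```python
import hashlib
import json

# Each block records the hash of its predecessor, so altering any earlier
# transaction changes that block's hash and breaks every later link.
def make_block(transactions, prev_hash):
    body = {"tx": transactions, "prev": prev_hash}
    block = dict(body)
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
second = make_block(["bob pays carol 2"], genesis["hash"])

# Tampering with the first block breaks the link the second block recorded:
genesis["tx"][0] = "alice pays bob 500"
recomputed = hashlib.sha256(
    json.dumps({"tx": genesis["tx"], "prev": genesis["prev"]},
               sort_keys=True).encode()).hexdigest()
print(recomputed == second["prev"])  # False
```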

Although the importance of the blockchain is often lost amongst the discussion of digital currency, blockchains have the potential to disrupt how the internet itself works. Simply put, there is a lot more to blockchains than just crypto-currency and monetary applications. Truth be told, a blockchain is a decentralized ledger protocol (and/or platform) that can govern both financial and non-financial types of application states.

A blockchain can be used to power decentralized business logic, which is contained in a cryptographic “element” that has intrinsic value and can only be unlocked if certain conditions are met. The business logic executes on a blockchain platform (a decentralized cloud service) once an automatic process validates that the terms and conditions set forth by participating parties are met.

Those concepts can be used to fuel the creation of P2P (peer-to-peer) or decentralized network solutions that allow virtual communities to create secure, auditable and hack-resistant services and applications.

Ultimately, blockchains may reign supreme in distributed, shared environments that are used by both open and closed digital communities – a concept that has been well vetted by Bitcoin as its secure methodology to handle currency.

However, that leaves one question – how does one build a blockchain and create a community that can use it?

One answer comes in the form of an open source platform that goes by the moniker of Ethereum, which is touted as a platform for decentralized applications but in reality has become a catalyst for building blockchain-based solutions. Ethereum leverages the blockchain ideology by providing both the platform and the development tools to build blockchain-based community solutions, which are decentralized in nature and highly resilient.

The ideology of a crypto-currency still permeates the Ethereum platform, but the currency does not have to carry any monetary value. In Ethereum’s case, it is more appropriately described as a cryptofuel, which Ethereum has dubbed Ether. Ether is used to power transactions, pay for computational steps, and democratize the distributed platform.

Without Ether, distributed applications could fall prey to infinite loops, excessive data consumption, and many other problems that could effectively destroy a decentralized application, and applications are the key component of a community.

However, to use Ether as a cryptofuel to power a creative process, one must embrace the Ethereum platform, which means there has to be much more to Ethereum than a blockchain and a cryptofuel. To that end, Ethereum has created a development environment called ETH/DEV, which offers IDEs, tools, and resources for building decentralized applications. Those applications are fueled by Ether, and therein lies the most important element of the blockchain.

The blockchain itself keeps track of the fuel units (Ether), and transactions can be assigned a cost in Ether units, or even Ether payments, making all interactions transaction-based. Ether does not need to have a particular monetary value: it could be based on reputation points earned, contributions to the community, or any other activity that adds or consumes some measure of value.
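The fuel-accounting idea can be illustrated with a toy ledger in Python. This is a deliberate simplification of how Ethereum actually meters computation, and all names are invented:

```python
class FuelLedger:
    """Toy account ledger: every transfer burns a small fixed amount of
    fuel, so an account with no fuel cannot keep issuing transactions."""
    def __init__(self, balances, fee=1):
        self.balances = dict(balances)
        self.fee = fee          # fuel burned per transaction
        self.history = []       # the auditable record of every transfer

    def transfer(self, sender, recipient, amount):
        cost = amount + self.fee
        if self.balances.get(sender, 0) < cost:
            raise ValueError("insufficient fuel")
        self.balances[sender] -= cost
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        self.history.append((sender, recipient, amount, self.fee))

ledger = FuelLedger({"alice": 10})
ledger.transfer("alice", "bob", 4)   # costs 4 + 1 unit of fuel
assert ledger.balances == {"alice": 5, "bob": 4}
```

Because every operation has a fuel cost, a runaway process simply runs out of Ether and stops, which is exactly the protection against infinite loops described above.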

For some community projects, content or code contributions may be the key way to earn Ether, which the earner can then use to “purchase” other elements from the community, escalate content, or reach out to new audiences. The blockchain comes into play by creating and maintaining the ledger of who holds how much Ether and how that Ether was earned, spent, or transferred. In short, the applicability of Ether is limitless.

The concept of using a crypto-currency like Ether brings many possibilities to light – for example, digital contracts can be secured using Ether and then recorded in perpetuity via the blockchain. What’s more, Ether can be traded for services, software, and other virtual elements, creating an economy based upon distributed applications.

One thing is certain: blockchain technology is here to stay, and organizations such as Ethereum are on the cusp of creating new decentralized solutions that eschew traditional borders and physical entities.


Hotels are getting over the millennials and returning to basics, at last

For what seems like millennia, hotel chains have been trying to tailor new ‘experiences’ to carefully researched millennial leanings. Now, after many attempts to create more social shared spaces, new aesthetics to counter the old-school tastes of Boomers and Gen Xers, and other supposed innovations, we are seeing new takes that ditch the millennial psychobabble and really try to get at what is emerging as travelers’ real desires.

Hyatt has launched a new Centric line of hotels, which feels like a serious departure from the adjective-laden attempts to get at the psyche of business and leisure travelers, and which instead just gets out of the way. A brand video refers to guests as “Modern Explorers” and “wish-listers.” The ‘lounge-centric’ design reminds me of the Ace Hotel in NYC, where guests and locals interact in a library-inspired setting.

“We call them Modern Explorers because these are travelers who are very curious, very independent, and very time crunched,” says Kristine Rose, VP of brands, Hyatt. “They have a wish list and they really want to make the most out of all of their experiences and reasons for traveling.”

These travelers want to be in the center of the urban experience and to interact with locals: local people, local food, local attractions. The restaurant is called ‘the Local Bar and Restaurant’ and will feature local dishes served up for the ‘casual foodie’.

I can attest to the attractiveness of the Centric concept to non-millennials, since I am a late Boomer. The practicality of ‘the essentials done right’, combined with a deeply local orientation, could be the definition of a cure for the experience I regularly have when traveling, even in luxury hotels.

At the other end of the spectrum, Hilton is also working away at trimming out the inessential, though in its new Tru hotels it is cutting business-oriented amenities like desks and targeting the budget-conscious leisure traveler.

As the company says, “Tru by Hilton is a category disrupter. It’s built on a belief that being cost conscious and having a great stay don’t have to be mutually exclusive. Tru focuses on what matters most to guests, with a hotel that is more than just a place to sleep, it’s a true travel experience.”

Hilton is running Tru as a 100% franchise operation, with systems designed from the bottom up to cut operational costs, leading to a $75–$90/night price point. This is an effort to appeal to people who might otherwise turn to Airbnb for accommodations, but who would really rather have a no-frills hotel, so long as quality meets some minimum.

So what is the deep trend? Modern travelers want no-fuss, easy in-and-out hotels that meet some promise of quality at a price – at various tiers – but that appeal to their desire to explore the hotel’s locale rather than remaining cooped up in private rooms or stodgy same-old-same-old eateries. A return to simplicity: a night’s stay and off you go!

Three Challenges the Hardware Industry Faces in 2016

2016 is the year many thought leaders in the tech space are urging caution, expecting markets to cool drastically and urging startups to stay afloat by minimizing burn rate. Yet at the same time, hardware is the fastest-growing sector in the market, with investment up 30x since 2010.

At this important juncture, what does the future hold for hardware companies?

To better understand where the hardware industry’s opportunities lie, what founders perceive as the greatest challenges, and what it means to be a hardware founder today, we surveyed over 200 hardware companies and uncovered a lot of interesting information. Here are the highlights.

Hardware Companies are Working to Build Products Faster

In our report, we found that on average most companies budget one to three months to build a functional prototype. Similarly, the majority of companies budget just three to six months to go from functional prototype to production.

If you’re not familiar with hardware development lifecycles, just know that this kind of schedule is incredibly fast (and ambitious) compared with what was possible just five years ago. Hardware startups are increasingly seeking to become leaner in order to get to market faster and maximize capital investment.

But while companies are working hard to be lean and build faster, the outcomes don’t always match expectations. Data shows that about four out of five VC-backed crowdfunding projects shipped late in 2014, and of those late projects (a cohort of 91 companies), 30 percent still hadn’t shipped by Q1 2015.

Hardware companies are setting ambitious schedules to get to market faster, and that’s fantastic and important, but there are clearly still obstacles preventing companies from building as fast as they’d like.

What are these obstacles and how can we overcome them? Well, there are many, and I won’t mention them all in this post, but one of the major ones we’re focusing on at Fictiv is prototyping speed. Iterating on a physical product is inherently slower than iterating on a digital product, but if we can help companies to iterate daily vs weekly, that’s a huge step forward.

Hardware Companies Seek Access to Better Tools

One of the key factors that has contributed to massive growth in the hardware sector is an increase in the number of tools available to hardware companies for prototyping and development.

We asked companies which tools they leverage in the development of their products: 91% use 3D printing, 58% use breadboards, 51% use Arduino, and the list goes on. (Honorable mention goes to the tried-and-true duct tape, used by 46% of survey takers!)

On the design side of things, there are a large variety of CAD programs available, but according to our results, Solidworks still reigns supreme, used by 70% of our survey takers.

While there’s been a big uptick in the number of tools available, we need to continue to teach a wider audience how to use these tools most effectively. Arduino and Adafruit, for example, are doing a fantastic job educating people on the electronics side; Dragon Innovation is teaching young companies how to work with manufacturers in China; and on our blog we’re educating engineers and designers on how to prototype on the mechanical side of things.

However, access to tools is not enough to make a successful hardware company; we also need to document and codify the knowledge of how best to use these tools and manufacture products at scale.

Raising Capital is Top of Mind

We polled companies on the greatest challenge in bringing a successful product to market, and 28% ranked funding and resources #1.

And they’re not alone—this feeling is being echoed by thought leaders across the venture capital space. For example, Mark Suster, partner at Upfront Ventures, cautions: “I suspect 2016 will be the year that the more heated private tech markets cool.” Similarly, Fred Wilson, co-founder of Union Square Ventures, recently projected that “Markdown Mania will hit the venture capital sector as VC firms follow Fidelity’s lead and start aggressively taking down the valuations in their portfolios.”

With VCs urging caution this year, minimizing burn rate and staying lean is the mantra for hardware startups in 2016.

The good news is that hardware is still the fastest growing sector in the industry and investment has been increasing at astounding rates: Investment in hardware is up 30x since 2010 and venture capital dollars in IoT have gone from $1.8 billion in 2013 to $2.9 billion in 2014 and $3.44 billion in 2015.

To stay lean, hardware companies should account for burn rate and optimize for speed in the prototyping stage of development. Too often we see cost-conscious startups skimp on up-front costs without considering the cost of wasted time, which ultimately comes down to burn rate (people are your biggest expense). So every time you order a 3D printed part, for example, the true cost of that part is really (part cost + (lead time × daily burn rate)).
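That formula is easy to run for yourself. A quick sketch, with illustrative numbers rather than real quotes:

```python
def true_part_cost(part_cost, lead_time_days, daily_burn_rate):
    """True cost of a prototype part once waiting time is priced in:
    part cost + (lead time x daily burn rate)."""
    return part_cost + lead_time_days * daily_burn_rate

# A $50 part with a 10-day lead time, at a $2,000/day burn rate,
# really costs $20,050: the "cheap" part is the expensive one.
assert true_part_cost(50, 10, 2000) == 20050

# The same part from a 1-day service at triple the unit price:
assert true_part_cost(150, 1, 2000) == 2150
```

At these assumed rates, paying 3x more per part but shipping in a tenth of the time cuts the true cost by nearly 90%.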

Main Takeaways

The evidence from our State of Hardware Report points toward incredible potential for the hardware industry. More and more companies are building innovative products, we have better tools and technologies for prototyping, and the community is strong and passionate about open-source knowledge.

But we still have a ways to go before hardware development can be truly accessible to everyone. We hope this snapshot of information points the community in the right direction toward making hardware universally accessible, so we can continue to build better tools and resources for truly democratized hardware development.