Wednesday, December 12, 2018
Recently I was on a panel discussion at the Greenwich CIO Executive Leadership Summit (CIO Summit) on how innovation and new technologies like AI and machine learning can benefit the enterprise. During the discussion, one of the audience members asked how we can reconcile the Productivity Paradox with spending more on technology innovation. This is the type of question a former coworker of mine would lovingly refer to as a "Stump the Chump" question: it is an economics question, and none of us on the panel were economists; we were management, marketing and technology types.
I don't feel any of us nailed a great response, but I have since done some research on the topic and come to the conclusion that, as an enterprise question, it is somewhat meaningless. It is still worth understanding, though, because it comes up from time to time in industry and economics articles, and when it does, some in senior management will take it as a reason to cut back or eliminate technology spending. It is not.
Before we get too far, what is the Productivity Paradox? In simple terms, it is the observation that even as technology spending has increased, there has not been a corresponding increase in the productivity of the economy as a whole. The cynical interpretation is that our technology spending is wasted. This is now coming up in relation to new spending on artificial intelligence and machine learning.
Why is the Productivity Paradox happening? These are some of the common answers.
The technology isn't as impressive as we think it is - On the surface this is the most obvious answer and, as I said, the cynical one. We are spending all this money on technology, and by extension AI and machine learning, but these expenditures are not gaining us anything. Sure, that could be.
We don't know how to use the technologies yet, or productivity trails adoption - This isn't the first time we have heard of the Productivity Paradox. It popped up in the 1980s as well. Then the late 1990s happened and economic productivity went through the roof. Not only did everyone have computers, but they were all connected through the internet, and all kinds of new possibilities opened up for us. This could be where we are now: on the cusp of some large economic surge as other technologies come together with AI and ML to change the landscape. Again, that could be.
We are measuring the wrong things - The economic measure of productivity primarily deals with the output of goods and services. Is economic productivity the only thing that makes or breaks a society? Probably not. Facebook, from a business perspective, is primarily an online ad-revenue-supported company. The economic measure of productivity would not capture in any way the benefit (or lack thereof) of social media. Similarly, it doesn't measure a lot of subjective quality-of-life activities. It may measure the creation of a video game but nothing to do with the use of that game. Likewise, if machine learning can make our shopping experiences subjectively "better," it would not measure that either. This could also be at play.
While these are all very interesting rationales, I don't think they have any day-to-day impact on your business. The real answer is, it just doesn't matter ... or at least it shouldn't matter. Why do I say this? Because the Productivity Paradox deals with a macroeconomic question about the best use of scarce resources. Individual companies do not measure their success by whether they maximize the productivity of the world economy. They measure it against their own mission, and that mission will be negatively impacted if they can't gain mind share and/or market share.
For the sake of argument, let's say the emergence of companies like Uber and Lyft didn't change the size of the pay-for-ride market at all. Let's say their new business model, enabled by new technology, didn't increase the overall productivity of the economy one bit. If you were a cab or other ride-based transportation company, did you care?
My argument is that they didn't laugh off Uber and Lyft saying, "Look at those idiots spending all that money on technology; don't they know that isn't going to raise the productivity of the economy?" No, instead they were pulling out their hair at the realization that Lyft and Uber were taking away large portions of their market share. That is why the Productivity Paradox is, and will likely remain, an irrelevant question for the enterprise. Corporate missions and goals do not deal with overall economic productivity, and as long as technology can deliver competitive advantage, it will continue to be relevant for the enterprise.
Tuesday, August 14, 2018
A Confluence of Enabling Technologies
In the years leading up to the release of the iPhone, all the technologies that made it possible were already in existence. We had touchscreens, cell phones, GPS devices, MP3 players, smart(ish) phones and even attempts at tablets. Everything was there, but no one could figure out how to put it all together. We had separate devices for GPS, listening to music, browsing the internet, reading email and making calls. While we had all the technologies, and could even use them in commercial products, no one had figured out how best to combine them into a new and innovative product that really made them work.
Now we've got new technologies and devices like Microsoft's HoloLens, the Amazon Echo with Alexa, Nest, and even similar capabilities on our phones. There are many, many more examples.
It feels like we are in a place very similar to where we were prior to 2007 and the introduction of the iPhone. There are a lot of enabling technologies like machine learning, bots, augmented reality, virtual reality, IoT and other ways for increasingly intelligent devices to interact with and change the world around us.
We might think, "Sure, I can do a lot of this stuff now on my Alexa and Android or iPhone, and I just got an Oculus Rift." But like the world prior to 2007, it doesn't feel like the current batch of products really has this all right. Not in the way that the early PCs did for in-home computing, or the way the Mac fundamentally changed how we interacted with them in 1984, or even how battery, display, processor and OS improvements led the way to the laptop. It wasn't the existence of the internet that brought us online; it was the standardization of HTML, the web browser and distribution technologies like DSL that put it all together.
I'm not saying that tomorrow there is going to be some new revolutionary device and we're all going to stop using our smartphones. But it does feel like there is increasing momentum for something new. All these new technologies are not being used to their fullest, and the current standard device platforms for delivering them seem clunky at best. The smartphone is no longer the revolutionary platform that it was; it hasn't been for a while. It may take several years for this to bubble over in a usable way, but it's time to start preparing. If you haven't started to learn about machine learning or augmented reality, perhaps now is the time to start investigating how these technologies work.
Wednesday, April 4, 2018
Even For Small Teams, AppCenter Helps Press the Easy Button
Opening up Xcode, Android Studio, Visual Studio, Atom or whatever integrated development environment we use and starting to code away is always easy enough. Sure, we can create an application, go through the process of manually signing it and get an app in the store. This is fine if we are working alone, but at some point many of us will be working on a team, or on an application complex enough that more formal testing and deployment are required. Then we start stitching together disparate tools to do mobile DevOps, deploy test versions of the app, handle crash reporting, analytics, etc. Getting all of this to work together can be difficult, particularly the build servers for automated builds, which can be time-consuming to set up and maintain for groups that don't have a lot of experience in that space.
This is the space where Microsoft's App Center really shines. Small teams, who may be new to mobile, need to know very little about setting up these processes to use App Center. One of its strengths is that it falls under Microsoft's new philosophy of embracing technologies outside of their own stack, so it isn't just for .NET folks. If you work in Xcode with Swift, no problem; Java on Android, no problem; React Native, no problem. App Center takes a variety of infrastructure needs common to most mobile apps and presents them as an all-in-one solution where the parts are easy to use, work well together, or can simply be taken à la carte.
To see how many different capabilities Microsoft has provided on a variety of platforms, consider the feature matrix:
- Features: Build, Test, Distribute, Crash Reporting, Analytics, Push Notifications and Code Push
- Platforms: Xamarin iOS and Android, native Android, native iOS, UWP, React Native iOS and Android, Cordova iOS and Android, and macOS
Microsoft has gone to great lengths to make most of the features work with many different mobile development platforms. I don't have space to get into how these features work here, but if you want more information you can watch my AppCenter class on Lynda.com or LinkedIn Learning.
Here's how I think about the good, bad and ugly parts of App Center
The Good
- We get continuous integration and continuous deployment set up quickly without needing to know a lot about how to do it
- There is a lot of extensibility, and other process tools can be tied in through the API and REST interfaces
- It has reasonable pricing for the features that are not outright free
- If you are feeling blue about managing all the devices in your Ad Hoc provisioning profiles, AppCenter can make this easy
- Do you only have a few devices to test on? AppCenter has thousands
- It even offers an entire framework (CodePush) for live-pushing React Native and Cordova JavaScript code
- Crash reporting and analytics can be set up easily on all supported development platforms (see the sketch after this list)
- There is just a lot in this box to like. With AppCenter and a cloud-hosted Git repository of the right type, there really is no reason not to have at least a basic CI process
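As an illustration of that point about crash reporting and analytics, here is a minimal sketch of what the setup can look like in a React Native app using the appcenter-analytics and appcenter-crashes packages. The event name and properties are invented for the example, and it assumes the app secret has already been configured in the native iOS and Android projects.

```typescript
// Minimal sketch (not the official sample): App Center crash reporting and
// analytics from a React Native app, using the appcenter-analytics and
// appcenter-crashes packages. Assumes the app secret is already configured
// in the native iOS and Android projects.
import Analytics from "appcenter-analytics";
import Crashes from "appcenter-crashes";

export async function reportStartup(): Promise<void> {
  // Track a custom analytics event; the name and properties are made up.
  await Analytics.trackEvent("app_started", { screen: "home", plan: "free" });

  // See whether the last session ended in a crash and log the report id.
  if (await Crashes.hasCrashedInLastSession()) {
    const report = await Crashes.lastSessionCrashReport();
    console.log(`Recovered from crash ${report.id}`);
  }
}
```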
The Bad
- There are limitations around customizing the build process (there are only three places where custom scripts can be inserted)
- There is no built-in way for builds to kick off custom automated testing (you must use scripts and the API/REST interface)
- Appium tests can only be written in Java
- The testing feature does not allow for manual testing; there is no direct interaction with the devices for exploratory testing
- There are some other limitations
- The Android React Native build only looks two directory levels deep in the repository for workspaces
- The deployment feature is still not as full-featured as HockeyApp
- Features like iOS store deployment that require access to the App Store account need two-factor authentication to be turned off
The Ugly
- To use the build services you only get Git integration; if you use another source control tool, too bad. You had also better be on VSTS, Bitbucket or GitHub
- On that note, repositories cannot be on-premises either
- Builds cannot easily be tied into a gated check-in concept
- The API/REST interface to create builds is fire-and-forget, so it is hard to kick off an AppCenter build from another tool and find out when the build is done and whether it failed
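For what it's worth, the fire-and-forget problem can be partly worked around by queuing the build through the REST API and then polling for its status yourself. The sketch below is an illustration under assumptions, not official tooling: the owner/app name, branch and token are placeholders, and it assumes the standard App Center endpoints for queuing a branch build and reading a build's status.

```typescript
// Hypothetical sketch: queue an App Center build over the REST API, then poll
// until it completes, to work around the fire-and-forget behaviour described
// above. Owner/app, branch and token are placeholders; requires Node 18+ for
// the global fetch API.
const token = process.env.APPCENTER_TOKEN ?? "";   // App Center API token
const app = "my-org/My-App";                       // {owner_name}/{app_name}
const branch = "develop";
const base = `https://api.appcenter.ms/v0.1/apps/${app}`;
const headers = { "X-API-Token": token, "Content-Type": "application/json" };

async function buildAndWait(): Promise<void> {
  // Queue a build for the branch.
  const queued = await fetch(`${base}/branches/${encodeURIComponent(branch)}/builds`, {
    method: "POST",
    headers,
    body: JSON.stringify({}),
  });
  const { id } = await queued.json();              // build id from the queue response

  // Poll the build status every 30 seconds (up to ~30 minutes).
  for (let attempt = 0; attempt < 60; attempt++) {
    const res = await fetch(`${base}/builds/${id}`, { headers });
    const build = await res.json();
    if (build.status === "completed") {
      console.log(`Build ${id} finished: ${build.result}`); // "succeeded" or "failed"
      return;
    }
    await new Promise((resolve) => setTimeout(resolve, 30_000));
  }
  throw new Error(`Build ${id} did not complete in time`);
}

buildAndWait().catch(console.error);
```

Even then, this is external polling rather than a true gated check-in, which is why it sits in the ugly column.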