Looks like Google might be ticking off a box on its wearables shopping list, or someone else might be. Basis Science, the company behind the Basis Health Tracker Watch, is on the market, according to two people familiar with the matter.
We’ve heard that the company has been shopping itself around over the past few weeks and has spoken to Google, Apple and possibly Samsung and Microsoft about a potential sale.
The price we’ve heard attached to any deal is “sub-hundred million,” which could mean a small return for investors like Norwest Venture Partners, Mayfield Fund and Intel Capital, which have poured more than $30 million into the company.
The alternative to an acquisition for Basis would be a long-sought Series C round of funding, those people say.
Though its market share is unclear, the company would be an interesting buy for any of the big four mentioned above. Google, which recently scooped up “Internet of Things” darling Nest, is gunning to be the frontrunner in both the AI and hardware spaces. It is also said to be working on its own wristwatch as an entry into the consumer hardware market.
Apple, too, is rumored to be keeping an iWatch product waiting in the wings, while Samsung’s smartwatch, the Galaxy Gear, has already hit the shelves, leaving much to be desired from a design perspective.
Microsoft would be the dark horse in this race, with only the Kinect to boast of from a wearables and hardware standpoint.
Of all the fitness trackers currently available, Basis is said to be the most accurate in its data collection and reporting, but also the clunkiest in its design. A generous parent company could give it the chance to experiment with a more streamlined, Jawbone- or Fitbit-esque product without the pains of raising another round of capital to support R&D.
But it’s worth remembering that a splashy launch doesn’t always bring long-term success. In fact, some of tech’s strongest startups, apps and organizations started out pretty quietly. Take Y Combinator. YC Demo Day is now one of the buzziest events in Silicon Valley — but in a really fascinating onstage interview with Derek Andersen at the Startup Grind 2014 conference earlier this month, YC co-founder Jessica Livingston discussed the more humble early days of the seed accelerator when it first launched in 2005.
Livingston said that at the first YC Demo Day back in Cambridge, Massachusetts, eight startups launched to a room with only “15 to 20 investors” in attendance. “Some of them were legit, and some of them were just rich people we knew and said, ‘Can you please come?’” she said. “But we knew we were on to something.”
She went on to explain how they knew they were on to something, and the key that kept Y Combinator growing from those early days (this bit starts at around 9:45 in the video embedded below this post):
“It wasn’t that hard because some people cared. And that’s an important thing to remember… the eight people we funded cared, you know, and some of the investors who funded them cared, and it was slowly growing.
Paul Buchheit, who’s one of the YC partners who invented Gmail, gives this great advice which is, ‘It’s so much better to make a few people love you, than a lot of people just like you.’ And that is just so true, with whatever you’re doing. And that’s what it was for us. A few people loved us, and we just sort of just grew from there. So even though reporters didn’t love us, they wouldn’t return my phone calls and could care less that we were doing this new way of funding, we just moved on. We didn’t need them.”
Livingston’s comments are a great reminder that while launch day can be an important milestone, and getting early press can be valuable, it’s not everything.
You can watch the entire interview with Jessica Livingston and Derek Andersen in the video embedded below — it was a great conversation. The comments excerpted above start at around 9:45.
Featured image of YC co-founder Trevor Blackwell via YCombinator
There can be little doubt that, just like Microsoft thinks touch is the future of computing, Google seems to believe voice will be the user interface of the future. Indeed, when I was in Mountain View earlier this month, a Google spokesperson challenged me to just use voice whenever possible on my phone.
For Google, all things voice now start with “Ok Google” or “Ok Glass.” With Android KitKat on flagship phones like the Moto X and Nexus 5, voice recognition isn’t just something you have to start with a click. It’s always listening to you and is waiting for you to talk to it.
I also went to see a screening of Google And The World Brain over the weekend, a 2013 documentary about Google’s controversial book-scanning project. The only person Google made available for the film was Amit Singhal, a Google VP and the head of its core ranking team. In the movie, Singhal doesn’t actually mention Google Books; instead, he talks about how the Star Trek computer was a major influence on his research. That, plus Google’s challenge to use voice commands whenever possible, made me think a bit more seriously about all of the work Google (and, arguably, Apple and others) has recently been doing around voice recognition and natural language processing.
In the early days of voice recognition and Apple’s Siri, talking to your phone or computer always felt weird. There’s just something off about talking to an inanimate object that barely understands what you want anyway. The early voice recognition tools were also so limited that getting them to work took Zen-like focus on your pronunciation and sticking more or less to the approved commands. Just ask anybody who has voice recognition in their car how much they enjoy it (though don’t ask anybody with an older Ford SYNC system; they may throw a fit).
Solving those kinds of hard problems is what tends to motivate Google, though. As I noted a few months ago, one of Google’s missions is to build the ultimate personal assistant, and to do that, it has to perfect voice recognition and – more crucially – the natural language processing algorithms behind it.
What Google’s voice commands let you do on the phone (and in the Chrome browser) today is pretty impressive. Ask it to “Call Mum” and it will do just that. It’ll open web pages for you, answer complex questions thanks to Google’s massive repository of data in the Knowledge Graph, set up appointments and reminders, convert currencies, translate words and phrases, and send emails and texts.
For voice searches, it’ll just speak back the answers. That’s something few companies can replicate, simply because they can’t match Google’s Knowledge Graph. More interestingly, though, many of these actions draw you into a short conversation with your phone.
“Call Alex.” “Which Alex?” “Alex Wilhelm.” “Mobile or home?” “Mobile.” “Calling Alex.”
The fact that this works, and that Google can even often recognize pronouns in extended conversations, is awesome. For now, though, this still feels weird to me. I’m not likely to use it in public anytime soon, and using it when I’m alone in my office feels even stranger.
I’m guessing that kind of hesitance will wear off over time, just like video conferencing felt weird at the beginning and now we’re all used to video chats on Skype, FaceTime and Google Hangouts.
Maybe the computer from “Her” is indeed the future of user interfaces. Either way, the natural language processing and all of the other tech that drives Google’s voice commands and search today will surely form the kernel of the artificial intelligence systems the company will one day build.
It’s no surprise Google bought the stealthy artificial intelligence startup DeepMind a few weeks ago. Google’s founders have long been interested in AI, and Larry Page is rumored to have led the DeepMind acquisition himself. Back in 2000, Page said he believed “artificial intelligence would be the ultimate version of Google.” The company continues down this path, and many of the researchers at Google’s semi-secret X labs seem to have a strong interest in AI, too. The ideal user interface for working with these systems is probably speech.
My experiment with using only voice control for a day pretty much failed, however — not because it didn’t work well, but because I simply don’t feel like talking to my phone most of the time.
On this week’s Droidcast, Chris Velazco and I get tough on smartwatches, but first we discuss Nokia’s Android-based “Nokia X” device plans and other infertile hybrid animals, as well as HTC’s renewed commitment to customer care and how that might affect its fortunes. Finally, we talk a bit about Chromecast, Google’s mobile-to-big-screen media streamer, and its new SDK.
Long story short: we know a lot about Nokia’s unreleased Android phone except why it exists; HTC made some promises to customers in a recent AMA; and Google has made the Cast SDK part of the most recent stable release of Google Play services, so we should see a slew of apps offering support for that home theater companion.
We invite you to enjoy our weekly Android podcasts every Sunday at 4 p.m. Eastern and 1 p.m. Pacific, in addition to our weekly Gadgets podcast at 3 p.m. Eastern and noon Pacific on Fridays. Subscribe to the TechCrunch Droidcast in iTunes, too, if that’s your fancy.
Intro music by Kris Keyser
Direct download available here.
Stack Overflow, the coding-focused Q&A site that’s proven to be an essential tool for professional and amateur programmers alike, had an approximately hour-long outage Sunday morning that affected a number of users.
According to Stack Overflow’s parent company Stack Exchange, the cause was a DDoS attack against its network provider. The issue has been “partially mitigated” and the site is back up and running now, Stack Exchange says.
Reports of Stack Overflow’s outage started to hit Twitter and Hacker News at around 11am Pacific Time Sunday, and continued for about an hour. The panicked (and often humorous) notes of programmers who were unable to access the site during their planned Sunday coding sessions show just how valuable the service is for so many people:
“Well, stackoverflow is down. Might as well pack it in and take the day off.” — (@pickett) February 16, 2014

“Stack Overflow being down reminds me how badly I need Stack Overflow in my life.” — Adam (@adamjstevenson) February 16, 2014

“stackoverflow is down. my career is on hold.” — John Rodley (@rodley) February 16, 2014

“Came to work on a Sunday and Stack Overflow is down EVERYBODY PANIC” — Vineet Shah (@vineetshah) February 16, 2014
Now that the site is back up and running, though, anyone who was secretly relieved at the prospect of getting a “snow day” away from coding will probably have to find another excuse.
We’ve reached out to Stack Exchange for more details on the DDoS attack and the resulting outage and will update with any information we receive.
A year ago, I wrote a post titled “Silicon Valley Slowly Awakens To Android.” Recently, I purchased a Nexus 5 as we develop and begin the early tests of Swell for Android, and I wanted to share some of my initial experiences carrying phones on both mobile platforms. What I want to focus on in this post are the elements of the Android experience I enjoyed and the elements of the iOS experience that I missed — what I don’t want to focus on is the “Android is better” or “Android sucks” debate. Now, with that disclaimer out of the way…

The last time I really spent time on Android was in the spring of 2011, and that was a frustrating experience for me. Now, with a brand new Nexus, it’s a new world.
Here’s what I like about having a Nexus 5 so far: The larger screen is enjoyable for reading Pocket and watching YouTube videos. Notifications are easier to digest. The integration of Google services makes things significantly easier. I found it easier to multitask and switch apps on Android. Having Google Now just up and running is obviously nice. I have SwiftKey but haven’t fiddled enough with it yet.

My personal favorites so far are products that can only be built on Android: Cover and Aviate. Cover, as many of you already know, is a lockscreen app that leverages sensor data from the handset and predicts which apps users may want at specific times. It’s surprisingly good at presenting me with the app I want to use at a given moment. One of the great attributes of Cover is that it reduces both the time it takes to get into an app and the cognitive load of sorting through apps. While our phones are cluttered with apps we rarely use, Cover intelligently elevates the apps we engage with most often. As Cover spreads, it will reward apps that earn organic daily active engagement. Aviate is similarly elegant: a new homescreen interface with tons of cool options. (I’m also excited to try Ingress, Agent, Cogi, and any other apps you could recommend.)
Now, here’s what I missed by not using the iPhone all the time: The slightly smaller form factor for typing. The Retina screen, of course. The responsiveness of the touchscreen glass. There are many apps (especially from startups) that just won’t be on Android for a while, as it’s more efficient for small companies building new products and experiences to go iOS-first. I also like that there’s no “back button” on iOS — that was a confusing element for me on Android, as I don’t think of going back to a previous screen on mobile (it seems more like a browser convention), though I can see how some may like it.
I’ve been carrying two phones for the last few weeks, largely for work, but I’m enjoying experimenting with the new device and operating system. Recently, I started to think: what would it take for me or other iPhone users to actually switch, to actually give away or sell the iPhone and just carry around this Nexus 5? Here’s what I came up with: Some will bolt for Android out of curiosity for something new, some will prefer cheaper and/or more flexible data plans, some will find all the apps they need on Android, and some will want a bigger screen, the ease of Google’s integrated services, and so on.
However, what will get people moving en masse? That’s a trickier question to answer, and it’s also not clear that’s in Google’s best interest.
As killer apps like Google Now improve, these types of native anticipatory services may be enough to bring iOS users over to Android. Or, since Android provides developers with more root access and data collection capabilities, app makers may create an entirely new mobile experience that is both impossible on iOS and vital to users. (That said, with hardware advancements like the M7 and Touch ID, the same could be said of Apple’s mobile platform — and, therefore, what we’re more likely to see is increasing divergence in the types of mobile experiences on Android and iOS.)

Now, assume Google Glass becomes a consumer-level success: that entire phone-to-Glass experience could end up being better powered by Android, though Google can continue to write great iOS software and expand its reach across platforms, even if the functionality is limited or not as well-integrated on iOS. On Twitter last night, @robustus suggested Android’s killer-app opportunity may be Bitcoin wallets, after Apple’s moves to block some Bitcoin apps, though such wallets could be open to more attacks. It’s a provocative thought, no doubt, and one we shouldn’t dismiss.

Or maybe this isn’t about one platform versus another, but about two platforms peacefully coexisting, preserving choice and competition for the benefit of consumers. Let’s hope that’s the case.