Google’s Just a Line AR app draws Android and iOS users together

Back in March, Google released an app called “Just a Line” as part of its slew of new ARCore 1.0 apps. On Wednesday, May 30, the company not only launched a version for the iPhone but introduced a new collaborative feature for both Android and iOS.

If you couldn’t already tell from its name, Just a Line is simple and fun. The app essentially lets you bring drawings to life. Using the camera, simply point at the area where you’d like to draw and press your finger on your smartphone screen to begin doodling. After you’re done, you can then walk around to see it from various angles.

The app is extremely reminiscent of Google’s Tilt Brush virtual reality app, which allows users to paint in a 3D space. But Tilt Brush is only available on expensive VR headsets, which makes Just a Line far more accessible.

Since the app was initially launched in conjunction with Google’s ARCore, it’s only been available for compatible Android devices. But iPhone users can now get in on the fun as well, by downloading the iOS version via the App Store. For both iOS and Android, users will also get to try a new feature that lets you simultaneously create together in augmented reality across both platforms.

To start, place both phones side-by-side and tap the partner icon. Once both phones are connected, you and your partner are able to see and add to the same drawing in real-time.

The collaborative experience is powered by “Cloud Anchors,” a capability introduced at this year’s Google I/O developer conference. It allows developers to create these kinds of shared AR experiences through the cloud, regardless of whether users are on Android or iOS.

At the conference, Google showed a demo app called Lightboard, in which two people shoot paintballs at each other. The point of the game is to cover an area with as much color as possible. We were able to play the game successfully between an Android phone and an iPhone in seconds.

Even though Google only recently launched its new framework for augmented reality apps on Android, there are already plenty of ARCore-based apps on the Google Play Store, spanning a variety of categories: from Pottery Barn 3D Room View, which lets you virtually try furniture in your home, to My Tamagotchi Forever, where you get to raise your own Tamagotchi.

Identifying Top Vulnerabilities in Networks: Old Vulnerabilities, IoT Botnets, Wireless Connection Exploits

by Tony Yang, Adam Huang, and Louis Tsai

We have noted time and again how compromising networks and connected devices is rooted in finding weak points in the system. Often, these weak points take the form of vulnerabilities, and worse, vulnerabilities that aren’t even new. In the context of the internet of things (IoT) and noteworthy security incidents related to it, these vulnerabilities have given attackers the means to use unsecured devices to facilitate malicious activities such as distributed denial-of-service (DDoS) attacks.

Using our IoT Smart Checker, a tool that scans networks for potential security risks, we looked into home and other small network environments and the vulnerabilities that connected devices usually encounter. Our findings homed in on known vulnerabilities, IoT botnets with top vulnerability detections, and devices that are affected.

From April 1 to May 15, we observed that 30 percent of home networks had at least one vulnerability detection. A detection would mean that we found at least one connected device being accessed through a vulnerability in the network. Our scanning covered different operating systems (OSs), including Linux, Mac, Windows, Android, iOS, and other software development kit (SDK) platforms.

Known vulnerabilities affecting IoT and other connected devices

What’s particularly interesting in our findings is that the top detections were not the usually expected weaknesses in the home network. While we still saw a number of logins using default credentials, like those exploited by the Mirai and Brickerbot malware, the top detected vulnerabilities (shown in Figure 1) were actually ones that have been known for years.

Figure 1. Top 10 vulnerabilities in connected devices

Being the gateways to internet-connected devices in networks, routers were unsurprisingly the devices on which most of the vulnerabilities were found. The highly publicized Poodle vulnerability in Secure Sockets Layer (SSL) and early Transport Layer Security (TLS), for example, was found to mostly affect routers as well as printers; attackers who successfully exploit the vulnerability can decrypt any encrypted traffic that they are able to capture. Drown, another well-known vulnerability, was also found to primarily affect routers; it affects Hypertext Transfer Protocol Secure (HTTPS) and any server or client that allows SSLv2 and TLS connections.
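
POODLE and DROWN both hinge on a device still accepting obsolete SSL/TLS versions, so a basic first check an administrator can run is to see which protocol a device’s web interface actually negotiates. Below is a minimal sketch using Python’s standard ssl module; the hostname is a placeholder, and the check only reports the negotiated version rather than actively probing for SSLv2/SSLv3 support (which modern OpenSSL builds generally refuse to speak).

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect with default (secure) client settings and report the negotiated protocol."""
    ctx = ssl.create_default_context()
    # Note: many routers use self-signed certificates; for a purely diagnostic check you
    # may need to relax verification (ctx.check_hostname = False, ctx.verify_mode =
    # ssl.CERT_NONE). Never do that for real traffic.
    with socket.create_connection((host, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"

# Placeholder hostname; point this at a device's web interface on your own network.
print(negotiated_tls_version("router.example.local"))
```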

The vulnerability exploited by the WannaCry ransomware remains pervasive, as it also makes an appearance in our top detections. Other noteworthy vulnerabilities in our top detections include the SambaCry Linux vulnerability, the OpenSSL Heartbleed bug, the remote code execution CVE-2014-9583 router vulnerability, and the remote code execution CVE-2017-6361 Network Attached Storage (NAS) vulnerability.

Figure 2. Top affected ports

Open ports on devices can leave networks at risk of attack unless network administrators disable unnecessary ports, or at least identify which ports are open so they can manage security better. When we looked at the affected ports in our scanning, we found that port 443 significantly eclipsed the other top ports on the list. Port 443 is the standard Transmission Control Protocol (TCP) port used for HTTPS websites using SSL. This checks out, as the Poodle and Drown vulnerabilities both involve weaknesses in SSL or its successor, TLS. Another top affected port is Server Message Block (SMB) port 445, which is used by the EternalBlue exploit that gave way to the infamous WannaCry outbreak in 2017.
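
For a home or small office network, identifying which of these ports are open does not require specialized tooling; a simple TCP connect check against each device is often enough. The sketch below is illustrative only (the target address and port list are assumptions, not drawn from the scanner described above) and uses just the Python standard library.

```python
import socket

# Ports highlighted above: 443 (HTTPS/SSL-TLS) and 445 (SMB, used by EternalBlue),
# plus a few other common services worth knowing about on a home network.
PORTS_TO_CHECK = [22, 80, 139, 443, 445, 8080]

def open_ports(host: str, ports=PORTS_TO_CHECK, timeout: float = 0.5):
    """Return the subset of ports that accept a TCP connection on the host."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

# Example against a placeholder address on the local network.
print(open_ports("192.168.1.1"))
```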

Vulnerabilities taken advantage of by IoT botnets

Vulnerabilities related to IoT botnets also emerged among our top detections. Two vulnerabilities in our top 10 detections, for example, are ones that are taken advantage of by the Reaper botnet. Reaper uses a combination of nine attacks that target known IoT vulnerabilities. Routers, Internet Protocol (IP) surveillance cameras, and NAS devices were found to be particularly susceptible to Reaper.

Satori, considered to be the successor of the Mirai botnet, is also represented at the top of our vulnerability detections with remote code execution CVE-2014-8361. As with Mirai, Satori’s source code was released publicly and can be used by any attacker, which could explain its appearance on the list. Satori propagates itself by scanning vulnerable devices and then compromising them.

Android and iOS mobile devices vulnerable to BlueBorne and KRACK

“Airborne” threats like BlueBorne and KRACK are capable of compromising devices over the air, provided that attackers are within range. BlueBorne, for example, enables an attacker to sniff, intercept, or redirect traffic between Bluetooth-enabled devices to gain access to data. The KRACK (Key Reinstallation AttaCK) exploit, on the other hand, takes advantage of several security flaws in the Wi-Fi Protected Access 2 (WPA2) protocol, making it possible for attackers to eavesdrop on users’ data.

Figure 3. 58 percent of Android devices found to be vulnerable to BlueBorne and KRACK

In this case, Android and iOS devices with Bluetooth and Wi-Fi capabilities were found to be at risk of these two threats. Seemingly living up to its reputation of being less secure than iOS, Android was found to have 58 percent of its devices vulnerable to BlueBorne and KRACK. The iOS platform isn’t exempt, though, with 12 percent of Apple smartphones found to be vulnerable. Patches have already been issued to iOS users, which could account for the platform’s relatively low numbers.

Figure 4. 12 percent of iOS devices found to be vulnerable to BlueBorne and KRACK

Securing connected devices against vulnerabilities and exploits

Attacks exploiting the aforementioned vulnerabilities can easily be avoided by applying patches made available by device manufacturers. However, not all manufacturers provide fixes for the vulnerabilities, and not all users are in the habit of patching routers, not to mention the devices connected to them.

Users should secure the way they set up their networks. Enabling password protection on routers and connected devices and replacing factory default passwords with strong, hard-to-guess ones is a step in the right direction. For added protection, the Trend Micro™ Home Network Security solution can check internet traffic between the router and all connected devices. Our IoT Smart Checker tool has been integrated into the Home Network Security solution and the HouseCall™ for Home Networks scanner. Enterprises can also monitor all ports and network protocols for advanced threats, and be protected from targeted attacks, with the Trend Micro™ Deep Discovery™ Inspector network appliance.

Users of the Trend Micro Home Network Security solution are protected from particular vulnerabilities via these rules:

  • 1058981 WEB Directory Traversal -21
  • 1059406 SSL OpenSSL TLS DTLS Heartbeat Information Disclosure -1 (CVE-2014-0160, Heartbleed)
  • 1059407 SSL OpenSSL TLS DTLS Heartbeat Information Disclosure -2 (CVE-2014-0160, Heartbleed)
  • 1130118 SSL OpenSSL SSLv3 POODLE Padding Brute Force (CVE-2014-3566)
  • 1130327 EXPLOIT ASUSWRT 3.0.0.4.376_1071 LAN Backdoor Command Execution (CVE-2014-9583)
  • 1133637 SMB Microsoft MS17-010 SMB Remote Code Execution -3
  • 1133638 SMB Microsoft MS17-010 SMB Remote Code Execution -4
  • 1134286 WEB Realtek SDK Miniigd UPnP SOAP Command Execution (CVE-2014-8361)

Microsoft expands data privacy tools ahead of GDPR

Microsoft has announced it will extend its data privacy tools to individual users of the company’s products and services worldwide. Microsoft’s Data Subject Rights include the right to know about the data the company collects on you and the ability to correct the data, delete it, or move the data elsewhere. Microsoft’s privacy dashboard will help its customers manage their data. The announcement mirrors action taken by other technology companies in preparation for the General Data Protection Regulation (GDPR), which goes into effect on May 25th. The GDPR sets new rules for how companies manage and share personal data.

In a blog post that calls privacy “a fundamental human right,” Microsoft says it’s had more than 1,600 of its engineers working on GDPR projects. It has also made investments to redesign its tools and systems to comply with the new rules. Microsoft notes that it will use customer feedback to improve the privacy tools.

“We believe privacy is a fundamental human right. As people live more of their lives online and depend more on technology to operate their businesses, engage with friends and family, pursue opportunities, and manage their health and finances, the protection of this right is becoming more important than ever.”

“Privacy is also the foundation for trust. We know that people will only use technology that they trust. Ultimately, trust is created when people are confident that their personal data is safe and they have a clear understanding of how and why it is used. This means companies like ours have a huge responsibility to safeguard the privacy of the personal data we collect and the data we manage for our commercial customers.”

Microsoft has also updated its consumer privacy statement, making it easier to read and adding specific information related to GDPR. Additions to the statement include highlighting new categories of personal data the company collects, such as voice data, content consumption data, and browsing history. The updated privacy statement also clarifies how Microsoft uses personal data generally and describes how customers can access and control their data.

Facebook has previously said it will roll out its privacy tools globally, while Apple has introduced a feature that allows anyone to download all the information the company has on them. The EU has previously expressed concern over Microsoft’s privacy settings in Windows 10. Microsoft released a new data collection viewer tool as part of the recent Windows 10 April 2018 Update to help dispel those worries.

Get Ready for AI-Enabled Advertisements — From Your Fridge

In this video, Entrepreneur Network partner Neil Patel sits down with Viewership.com’s Adam LoDolce to address some of the questions of his social media followers.

When one Facebook fan asks about the future of AI as a competitor to SEO and digital marketing, Patel explains how AI will be integrated into digital marketing.

For Patel, future smart appliances will most likely be able to anticipate when you’re low on a particular item, what products you may also be interested in, and the price difference between brands. Taken together, these capabilities come close to reading your mind. As technology gets more sophisticated, ordering items from your couch may become second nature.

So prepare yourself: a new publisher of advertising content may soon be waiting for you — your AI-integrated fridge.

Click the video to hear more of Patel’s thoughts on AI and digital advertising. 

Microsoft acquires conversational A.I. technology firm Semantic Machines

Microsoft is betting big on artificial intelligence. In a blog post published Sunday, May 20, the Redmond, Washington-based technology giant announced the acquisition of Semantic Machines, a company focused on building conversational A.I. “Their work uses the power of machine learning to enable users to discover, access, and interact with information and services in a much more natural way, and with significantly less effort,” Microsoft notes. The move could help give Cortana the leg up it needs on competitors like Amazon Alexa and Google Assistant.

“AI researchers have made great strides in recent years, but we are still at the beginning of teaching computers to understand the full context of human communication,” wrote David Ku, CVP and chief technology officer of Microsoft A.I. and Research. “Most of today’s bots and intelligent assistants respond to simple commands and queries, such as giving a weather report, playing a song or sharing a reminder, but aren’t able to understand meaning or carry on conversations.” But conversational A.I. could turn this norm on its head, and Semantic could be at the forefront of this change.

Semantic has previously worked with major tech firms, leading automatic speech recognition development for Apple’s Siri. In essence, Semantic employs machine learning in order to provide context to chatbot conversations, making dialogue seem a bit more natural and better-flowing.

“With the acquisition of Semantic Machines, we will establish a conversational AI center of excellence in Berkeley to push forward the boundaries of what is possible in language interfaces,” wrote Ku. “Combining Semantic Machines’ technology with Microsoft’s own A.I. advances, we aim to deliver powerful, natural and more productive user experiences that will take conversational computing to a new level. We’re excited to bring the Semantic Machines team and their technology to Microsoft.”

Thus far, no financial details of the acquisition have been disclosed.

Microsoft is by no means the only company trying to make strides when it comes to artificial intelligence and its smart assistants. Amazon, for example, is trying to give Alexa a better memory, while Google, with its new Duplex offering, is making bots so human-like that they’re practically indistinguishable from people during phone conversations. We’ll just have to see how Microsoft keeps up.

Are Passwords Leaving Your Network Unlocked?

Passwords today are like spare keys that have been given out to too many friends and neighbors. Common password content like birthdays, favorite sports teams, or names of children or pets is easily found on social media accounts. There are precautions you can take to strengthen your passwords, like using longer passwords, uncommon phrases, or password generators. Ultimately, even with these precautions, hackers have the tools to get around many of your password tactics. Your accounts need an extra layer of protection so that your business network does not fall into the wrong hands. This is where multi-factor authentication comes in.

Multi-factor authentication requires two or more pieces of evidence before a user is granted access to a protected account or device. We see this in practice all the time. At the ATM, for instance, the user must both present the correct debit card and enter the right PIN. This is an example of two-factor authentication at work. The system works by pairing something the user knows, like a PIN or a password, with something only the intended user should have on hand, such as a card or a phone.

When your sensitive business accounts and databases are secured only by passwords, you are putting yourself at risk, even when the passwords are complex and changed frequently. You need backup protection in case passwords are compromised.

There are many products that help you add this layer of protection to your network. Trying to sift through, select, and implement these products can be time-consuming and confusing. At the same time, you know that an upfront investment of time and resources in cybersecurity can make all the difference when hackers strike. Multi-factor authentication creates more work for the hackers, and more opportunities for your defenses to protect your network. If someone trying to break into an employee’s email account guesses the password but then needs to enter a randomly generated code sent to that employee’s phone or workplace computer, they may simply move on to a less protected network. Unlike a PIN, which suffers many of the same pitfalls as a password, these codes typically work only for a limited period of time and have no personal connection to the user, meaning they cannot be guessed from personal details. This type of system gives you greater confidence that the people accessing your network are the ones meant to be there.
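
The time-limited, randomly generated codes described above are commonly implemented as time-based one-time passwords (TOTP, defined in RFC 6238). The following is a minimal sketch of how such a code is derived from a shared secret and the current time; the secret, 30-second window, and 6-digit length are illustrative defaults, not a description of any specific vendor’s product.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step           # number of elapsed time steps
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Both the server and the user's phone derive the same short-lived code from a
# shared secret; it changes every 30 seconds and reveals nothing personal.
shared_secret = "JBSWY3DPEHPK3PXP"  # illustrative base32 secret, not a real credential
print(totp(shared_secret))
```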

Setting up multi-factor authentication can have some unforeseen complexities, and is best handled by IT professionals who have the expertise and skillset to implement this useful protection for your business.

FUSE3 Communications has the experience and cybersecurity expertise you need to ensure your network is as secure as possible. Contact us today to find out your next steps.

Apple makes coding curriculum accessible for deaf and blind students

Today, Apple announced that it is teaming up with educators to bring Everyone Can Code to schools serving students who are deaf or blind or who have other assistive needs. This fall, teachers at certain schools that serve students with disabilities will begin incorporating Everyone Can Code into their classroom curriculums.

“Apple’s mission is to make products as accessible as possible,” Apple CEO Tim Cook said in a statement. “We created Everyone Can Code because we believe all students deserve an opportunity to learn the language of technology. We hope to bring Everyone Can Code to even more schools around the world serving students with disabilities.”

Through Everyone Can Code, Apple hopes to encourage kids to learn to code in an easy and appealing way, laying the groundwork for future STEM careers. The program is compatible with VoiceOver, which is sophisticated screen-reading technology for people with vision impairments. Using this gesture-based tech, people can learn to code without having to actually see the screen.

For students who have hearing disabilities, FaceTime can capture expressions and gestures so that they can fully interact with the program. Additionally, features like Type to Siri and devices like Made for iPhone hearing aids can help deaf students use Everyone Can Code.

The rollout of Everyone Can Code for students who are deaf and blind will start at eight schools. Presumably, Apple will work with these educators to refine the platform before making it available more widely. It’s a great step for accessibility.

California remains a hotbed for self-driving car tests

California has long been a popular choice for autonomous vehicle testing, due to a favorable regulatory environment, challenging roads, and the presence of a wealth of talented developers and engineers. This was further illustrated earlier this week by a pair of announcements from two major players in the space.

Didi Chuxing was granted permission to test its autonomous cars on public roads in California by the state’s Department of Motor Vehicles (DMV), The Drive reports. The move by the Chinese ride-hailing giant — Didi counts a whopping 450 million users — marks the first expansion of the company’s autonomous driving efforts beyond its home country.

Didi looks poised to be the second entrant into China’s autonomous mobility market, after domestic search giant Baidu, when it launches its cars in China next year, and this approval from the DMV could pave the way for the firm to enter the US shortly after.

Apple, meanwhile, deployed another 28 vehicles equipped with its self-driving technologies in the state, bringing the iPhone maker’s total test fleet to 55, according to TechCrunch. The additional vehicles mean Apple now has the second-highest number of test cars on the state’s roads after GM’s Cruise Automation division, which houses all the Detroit-based carmaker’s autonomous driving efforts.

Apple’s self-driving technology project has long been one of the industry’s most-watched efforts, and though the company got a late start compared with other prominent players like Waymo, Ford, and Uber, this news suggests its efforts are making significant progress.

California will be one of the most fiercely competitive geographies in the US for autonomous mobility services, even if Apple and Didi don’t enter the market for several years. California hosts more tests of the leading companies’ autonomous cars than any other state. Alphabet’s Waymo — widely considered the market leader — has been testing its vehicles in California since 2012 and has logged nearly a million test miles since the start of 2016.

Additionally, GM’s Cruise division has been trialing a ride-hailing service for its own employees since the middle of last year. Uber, meanwhile, has been testing on and off in the state since last January, though it recently suspended all tests after one of its cars killed a pedestrian in Arizona. Even if Didi and Apple enter the California market after these other players, the space will be crowded, highly competitive, and initially fragmented.

Companies could thus be engaged in a fierce price war for the first few years to build out their market shares, but they may also need to find other ways to differentiate their services, such as marketing them as safer and more reliable than the rest.

Automakers are on the verge of a prolonged period of rapid change to the way they do business, thanks to the combined disruptive forces of growing on-demand mobility services and self-driving cars, which will start to come to market in the next couple of years.

By the end of 2019, Google spinoff Waymo, Uber, and GM all plan to have fleets of autonomous cars deployed in various US cities to provide on-demand rides for passengers. By eliminating the cost of the driver, these rides are expected to be far cheaper than typical Uber or Lyft rides, and even cheaper than owning a car for personal transportation.

Many industry experts are predicting that such cheap on-demand autonomous ride services will result in a long-term decline in car ownership rates — PwC predicts that the total number of cars on the road in the US and EU will drop from 556 million last year to 416 million in 2030.

This decline in car ownership represents an enormous threat to automakers’ traditional business models, forcing them to find alternative revenue sources. Many of these automakers, including GM, Ford, and Daimler, have plans to launch their own on-demand ride-hailing services with fleets of self-driving cars they will manufacture, potentially giving them a new stream of recurring revenue. This could set them up to take a sizeable share of a market that is expected to be worth trillions by 2030.

However, competing in the on-demand mobility market will pit legacy automakers against ride-hailing services from startups and tech giants that have far greater experience in acquiring and engaging consumers through digital channels. To succeed in what will likely be a hyper-competitive market for urban ride-hailing, automakers will have to foster new skill sets in their organizations, and transform from companies that primarily produce vehicles to ones that also manage vehicle fleets and customer relationships.

That will entail competing with startups and tech giants for software development and data science talent, as well as reforming innovation processes to keep pace with digital trendsetters. Automakers will also need to create unique mobile app and in-car experiences to lure customers. Finally, these automakers will face many overall barriers in the market, including convincing consumers that self-driving cars are safe, and dealing with a complex and evolving regulatory landscape.

In a new report, Business Insider Intelligence, Business Insider’s premium research service, delves into the future of the on-demand mobility space, focusing on how automakers will use fleets of self-driving vehicles to break into an emerging industry that’s been dominated thus far by startups like Uber and Lyft. We examine how the advent of autonomous vehicles will reshape urban transportation, and the impact it will have on traditional automakers. We then detail how automakers can leverage their core strengths to create new revenue sources with autonomous mobility services, and explore the key areas they’ll need to gain new skills and capabilities in to compete with mobility startups and tech giants that are also eyeing this opportunity.

Here are some of the key takeaways:

  • The low cost of autonomous taxis will eventually lead car ownership rates among urban consumers to decline sharply, putting automakers’ traditional business models at risk.
  • Many automakers plan to launch their own autonomous ride-hailing services with the self-driving cars they’re developing to replace losses from declining car sales, putting them in direct competition with mobility startups and tech giants looking to launch similar services.
  • Additionally, automakers plan to maximize utilization of their autonomous on-demand vehicles by performing last-mile deliveries, which will force them to compete with a variety of players in the parcel logistics industry.
  • Regulatory pressures could also push automakers to consider alternative mobility services besides on-demand taxis, such as autonomous on-demand shuttle or bus services.
  • Providing these types of services will force automakers to make drastic changes to their organizations to acquire new talent and skills, and not all automakers will succeed at that.

In full, the report:

  • Forecasts the growth of autonomous on-demand ride-hailing services in the US.
  • Examines the cost benefits of such services for consumers, and how they will reshape consumers’ transportation habits.
  • Details the different avenues for automakers to monetize the growth of autonomous ride-hailing.
  • Provides an overview of the various challenges that all players in the self-driving car space will need to overcome to monetize their investments in these new technologies in the coming years.
  • Explains the key factors that will be critical for automakers to succeed in this emerging market.
  • Offers examples of how automakers can differentiate their apps and services from competitors’.

Facebook data on 3 million users reportedly exposed through personality quiz

Facebook data on more than 3 million people who took a personality quiz was published onto a poorly protected website where it could have been accessed by unauthorized parties, according to New Scientist. In a report exposing the potential leak, New Scientist says that the data contained Facebook users’ answers to a personality trait test. While it didn’t include users’ names, in many cases it contained their age, gender, and relationship status. For 150,000 people, it even contained their status updates.

All that data was supposed to be accessible only to approved researchers through a collaborative website. However, New Scientist found that a username and password that granted access to the data could be found “in less than a minute” with an online search, enabling anyone to download the trove of personal information.

The data was gathered by a psychology test called myPersonality, according to New Scientist. Around half of the test’s 6 million participants are said to have allowed their information to be anonymously shared with researchers. The team behind myPersonality let any researcher who agreed to use the data anonymously sign up to access the information that had been collected; in total, 280 people were given access, including employees of Facebook and other major tech companies, according to the report.

The basics here all sound remarkably similar to what happened with Cambridge Analytica, which gained access to information from more than 87 million Facebook users thanks to a personality test called thisisyourdigitallife. In both cases, the tests were initially made by University of Cambridge researchers. And both even had one researcher in common: Aleksandr Kogan.

Kogan was the creator of thisisyourdigitallife, and according to New Scientist, he was listed as part of the myPersonality project until mid-2014; it sounds as though the project began around 2009. The University of Cambridge told New Scientist that myPersonality was started before its creator joined the university and did not go through its ethics review process.

It’s not known whether the data was improperly accessed using the publicly available username and password. A Facebook spokesperson told New Scientist that the app was being investigated and would be banned if it “refuses to cooperate or fails our audit.” As part of its ongoing investigation into misuse of user data, Facebook said this morning that it had so far suspended 200 apps pending review. That included myPersonality.

While a leak of 3 million users’ data is far smaller than the 87 million obtained by Cambridge Analytica, the story still serves as another warning of how easily this information can spread and just how detailed it can be. One of the bigger issues here is that, even though the data was supposed to be anonymized, New Scientist points out that the people behind it could easily have been re-identified using the extra Facebook information attached to each personality test.
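
To see why “anonymized” records with demographics attached are risky, consider the following illustrative sketch (the records, attribute names, and matching logic are invented for the example). Linking the quiz data to any other dataset that shares a few quasi-identifiers, such as age, gender, and relationship status, can narrow an “anonymous” record down to a single named person.

```python
# Illustrative sketch of re-identification by linking quasi-identifiers.
# All records are made up; the point is that a handful of attributes
# (age, gender, relationship status) can uniquely pin down a person.

quiz_records = [  # "anonymized" quiz data: no names, but demographics attached
    {"id": "u1", "age": 34, "gender": "F", "relationship": "married", "openness": 0.82},
    {"id": "u2", "age": 22, "gender": "M", "relationship": "single",  "openness": 0.41},
]

public_profiles = [  # a second, non-anonymous dataset with overlapping attributes
    {"name": "Alice Example", "age": 34, "gender": "F", "relationship": "married"},
    {"name": "Bob Example",   "age": 22, "gender": "M", "relationship": "single"},
]

def link(record, profiles):
    """Return the public profiles whose quasi-identifiers match the quiz record."""
    keys = ("age", "gender", "relationship")
    return [p for p in profiles if all(p[k] == record[k] for k in keys)]

for r in quiz_records:
    matches = link(r, public_profiles)
    if len(matches) == 1:  # a unique match means the "anonymous" record is re-identified
        print(f"{r['id']} re-identified as {matches[0]['name']}")
```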

Google’s AI sounds like a human on the phone — should we be worried?

It came as a total surprise: the most impressive demonstration at Google’s I/O conference yesterday was a phone call to book a haircut. Of course, this was a phone call with a difference. It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism.

The crowd was shocked, but the most impressive thing was that the person on the receiving end of the call didn’t seem to suspect they were talking to an AI. It’s a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges.

For example, does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear? And is this another example of tech privilege, where those in the know can offload boring conversations they don’t want to have onto a machine, while those receiving the calls (most likely low-paid service workers) have to deal with some idiot robot?

In other words, it was a typical Google demo: equal parts wonder and worry.

But let’s start with the basics. Onstage, Google didn’t talk much about the details of how the feature, called Duplex, works, but an accompanying blog post adds some important context. First, Duplex isn’t some futuristic AI chatterbox, capable of open-ended conversation. As Google’s researchers explain, it can only converse in “closed domains” — exchanges that are functional, with strict limits on what is going to be said. “You want a table? For how many? On what day? And what time? Okay, thanks, bye.” Easy!

Mark Riedl, an associate professor of AI at Georgia Tech with a specialism in computer narratives, told The Verge that he thought Google’s Assistant would probably work “reasonably well,” but only in formulaic situations. “Handling out-of-context language dialogue is a really hard problem,” Riedl told The Verge. “But there are also a lot of tricks to disguise when the AI doesn’t understand or to bring the conversation back on track.”

One of Google’s demos showed perfectly how these tricks work. The AI was able to navigate a series of misunderstandings, but did so by rephrasing and repeating questions. This sort of thing is common for computer programs designed to talk to humans. Snippets of their conversation seem to show real intelligence, but when you analyze what’s being said, they’re revealed as preprogrammed gambits. Google’s blog post offers some fascinating details on this, spelling out some of the verbal tics Duplex will use. These include elaborations (“for Friday next week, the 18th.”), syncs (“can you hear me?”), and interruptions (“the number is 212-” “sorry, can you start over?”).
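
To make the idea of a “closed domain” concrete, here is a rough sketch of the general slot-filling pattern such systems use: ask for each missing piece of information, rephrase on a misunderstanding, and hand off when stuck. This is not Google’s Duplex implementation; the slot names, prompts, and retry limit are invented for illustration.

```python
# Minimal sketch of a closed-domain, slot-filling dialogue loop.
# Not Duplex: just the general pattern of asking for missing slots,
# rephrasing on misunderstandings, and handing off when stuck.

import re

SLOTS = {
    "party_size": (r"\b(\d+)\b", "For how many people?"),
    "day":        (r"\b(monday|tuesday|wednesday|thursday|friday|saturday|sunday)\b",
                   "On what day?"),
    "time":       (r"\b(\d{1,2}(:\d{2})?\s?(am|pm))\b", "What time works?"),
}
MAX_RETRIES = 2  # after this many misunderstandings, escalate to a human operator

def book_table():
    filled = {}
    for slot, (pattern, prompt) in SLOTS.items():
        retries = 0
        while slot not in filled:
            answer = input(prompt + " ").lower()
            match = re.search(pattern, answer)
            if match:
                filled[slot] = match.group(1)
            else:
                retries += 1
                if retries > MAX_RETRIES:
                    print("Let me hand you to a person who can help.")
                    return None
                # the "trick": repeat or rephrase rather than understand freely
                print("Sorry, I didn't catch that.")
    print(f"Booking a table for {filled['party_size']} on {filled['day']} at {filled['time']}.")
    return filled

if __name__ == "__main__":
    book_table()
```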

It’s important to note that Google is calling Duplex an “experiment.” It’s not a finished product, and there’s no guarantee it’ll be widely available in this form, or widely available at all. (See also: the real-time translation feature Google showed off for its Pixel Buds last year. It worked flawlessly onstage, but was hit-and-miss in real life, and available only to Pixel phone owners.) Duplex works in just three scenarios at the moment: making reservations at a restaurant; scheduling haircuts; and asking businesses for their holiday hours. It will also only be available to a limited (and unknown) number of users sometime this summer.

One more big caveat: if a call goes wrong, a human takes over. In its blog post, Google says Duplex has a “self-monitoring capability” that allows it to recognize when conversations have moved beyond its capabilities. “In these cases, it signals to a human operator, who can complete the task,” says Google. This is similar to Facebook’s personal assistant M, which the company promised would use AI to deal with customer service scenarios, but ended up outsourcing an unknown amount of this work to humans instead. (Facebook closed this part of the service in January.)

All this gives us a clearer picture of what Duplex can do, but it doesn’t answer the questions of what effects Duplex will have. And as the first company to demo this tech, Google has a responsibility to face these issues head-on.

The obvious question is, should the company notify people that they’re talking to a robot? Google’s vice president of engineering, Yossi Matias, told CNET it was “likely” this would happen. Speaking to The Verge, Google went further, and said it definitely believes it has a responsibility to inform individuals. (Why this was never mentioned onstage isn’t clear.)

Many experts working in this area agree, although how exactly you would tell someone they’re speaking to an AI is a tricky question. If the Assistant starts its calls by saying “hello, I’m a robot” then the receiver is likely to hang up. More subtle indicators could mean limiting the realism of the AI’s voice or including a special tone during calls. Google tells The Verge it hopes a set of social norms will organically evolve that make it clear when the caller is an AI.

Joanna Bryson, an associate professor at the University of Bath who studies AI ethics, told The Verge that Google has an obvious obligation to disclose this information. If robots can freely pose as humans, the scope for mischief is incredible, ranging from scam calls to automated hoaxes. Imagine getting a panicked phone call from someone saying there was a shooting nearby. You ask them some questions, they answer — enough to convince you they’re real — and then hang up, saying they got the wrong number. Would you be worried?

But Bryson says letting companies manage this themselves won’t be enough, and there will need to be new laws introduced to protect the public. “Unless we regulate it, some company in a less conspicuous position than Google will take advantage of this technology,” says Bryson. “Google may do the right thing but not everyone is going to.”

And if this technology becomes widespread, it will have other, more subtle effects, the type which can’t be legislated against. Writing for The Atlantic, Alexis Madrigal suggests that small talk — either during phone calls or conversations on the street — has an intangible social value. He quotes urbanist Jane Jacobs, who says “casual, public contact at a local level” creates a “web of public respect and trust.” What do we lose if we give people another option to avoid social interactions, no matter how minor?

One effect of AI phone calls might be to make us all a little bit ruder. If we can’t tell the difference between humans and machines on the phone, will we treat all phone conversations with suspicion? We might start cutting off real people during calls, telling them: “Just shut up and let me speak to a human.” And if it becomes easier for us to book reservations at a restaurant, might we take advantage of that fact and book them more speculatively, not caring if we don’t actually show up? (Google told The Verge it would limit both the number of daily calls a business can receive from Assistant, and the number of calls Assistant can place, in order to stop people using the service for spam.)

There are no obvious answers to these questions, but as Bryson points out, Google is at least doing the world a service by bringing attention to this technology. It’s not the only company developing these services, and it certainly won’t be the only one to use them. “It’s a huge deal that they’re showcasing it,” says Bryson. “It’s important that they keep doing demos and videos so people can see this stuff is happening […] What we really need is an informed citizenry.”

In other words, we need to have a conversation about all this, before the robots start doing the talking for us.