Facebook admits it scans what you send via Messenger

If you’ve ever used Facebook’s Messenger chat app to share texts and links with your friends, Facebook has pored through them.

Facebook is automatically scanning Facebook Messenger conversations for unacceptable content, the company has confirmed, monitoring both the text and images that users share. The revelation surfaced as an unexpected side detail in an interview with CEO Mark Zuckerberg.

On Wednesday, Facebook founder and CEO Mark Zuckerberg agreed to testify before the House Energy and Commerce Committee on April 11th regarding his company’s data privacy practices. On the same day, Facebook said it would propose updates to its data policy and terms of service to reflect more transparency about how the company collects and shares information on Facebook, Messenger, Instagram and WhatsApp.

“It’s important to show people in black and white how our products work – it’s one of the ways people can make informed decisions about their privacy,” wrote Facebook’s Chief Privacy Officer Erin Egan and Deputy General Counsel Ashlie Beringer in a blog post. They added:

“These updates are about making things clearer. We’re not asking for new rights to collect, use or share your data on Facebook. We’re also not changing any of the privacy choices you’ve made in the past.”

Zuckerberg gave an example of how it works in a recent interview: the system had spotted messages related to the ethnic cleansing in Myanmar. At the time, the chief exec said, the system was able to step in and block the transmission of the messages through Facebook’s network.

“So that’s the kind of thing where I think it is clear that people were trying to use our tools in order to incite real harm,”

– Zuckerberg

The company is working to make its privacy policies clearer, but still ends up with gaps between what it says users have agreed to, and what users think they actually agreed to. The Messenger scanning systems “are very similar to those that other internet companies use today,” the company said.

Read more news at Learn2create – NEWS

 

Scientists create self-replicating neural network

A pair of researchers from Columbia University recently built a self-replicating AI system.

Instead of painstakingly creating the layers of a neural network and guiding its development as it becomes more advanced, they’ve automated the process. The researchers, Oscar Chang and Hod Lipson, published their fascinating paper, titled “Neural Network Quine,” earlier this month, and with it a novel method for “growing” a neural network.

Here’s what they argue in the paper, which appeared on arXiv this month:

“The primary motivation here is that AI agents are powered by deep learning, and a self-replication mechanism allows for Darwinian natural selection to occur, so a population of AI agents can improve themselves simply through natural selection – just like in nature – if there was a self-replication mechanism for neural networks.”

The researchers compare their work to quines, a type of computer program that produces copies of its own source code. In neural networks, however, it’s not the source code but the weights – which determine the connections between the different neurons – that are being cloned.
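To make the analogy concrete, here’s a classic minimal quine – a short Python program whose output is exactly its own source code (this example is ours, not from the paper):

```python
# A minimal quine: running this program prints its own source code.
# The trick is a template string that gets formatted with its own repr.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Chang and Lipson’s networks aim for the analogous fixed point: instead of printing source code, the network’s outputs reproduce its own weights.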

The researchers set up a “vanilla quine” network, a feed-forward system that produces its own weights as outputs. The network can also be extended to self-replicate its weights and solve a task at the same time. They decided to use it for image classification on the MNIST dataset, where computers have to identify the correct digit from a set of handwritten numbers from zero to nine.
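As a rough illustration of the idea (not the paper’s exact architecture – the real quine uses coordinate encodings and a more careful training scheme), here is a tiny numpy sketch of a network trained so that its outputs match its own weights:

```python
import numpy as np

# Hedged sketch in the spirit of a "vanilla quine": a tiny network is
# asked to output its own weight values. Each weight gets a fixed random
# input code, and the network's output for that code should equal the
# weight itself. Sizes and the training loop are illustrative choices.

rng = np.random.default_rng(0)
H = 16                                  # number of weights / hidden units
W = rng.normal(0, 0.1, (H,))            # the weights to be reproduced
codes = rng.normal(0, 1, (W.size, H))   # fixed random input code per weight

def predict(W):
    # One linear layer: the prediction for weight i is codes[i] . W
    return codes @ W

def replication_loss(W):
    # How far the network's outputs are from its own weights
    return np.mean((predict(W) - W) ** 2)

# Gradient descent on the self-replication loss
lr = 0.01
for step in range(2000):
    residual = codes @ W - W
    grad = 2 * (codes.T @ residual - residual) / W.size
    W -= lr * grad

print("final self-replication loss:", replication_loss(W))
```

Note that naive gradient descent like this tends to collapse toward the trivial all-zero quine; avoiding such degenerate solutions is part of what makes the paper’s actual construction more involved.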

Accuracy?

The test network was trained on 60,000 MNIST images and tested on another 10,000. After 30 runs, the quine network reached an accuracy of 90.41 per cent. It’s not a bad start, but its performance doesn’t really compare to larger, more sophisticated image-recognition models out there.

The paper states that “self-replication occupies a significant portion of the neural network’s capacity.” In other words, the neural network cannot focus on the image recognition task when it also has to self-replicate.

“This is an interesting finding: it is more difficult for a network that has increased its specialization at a particular task to self-replicate. This suggests that the two objectives are at odds with each other,”

the paper said.

In the future AI will create itself, advance itself, and integrate new neural networks through a natural selection process. What’s the worst that could happen?


This tiny wearable knows what you’ve been eating, drinking, and smoking

Wireless real-time monitoring through this wearable could add precision to the linkage between diet and health.

A recent breakthrough in miniaturized sensor technology could end up taking a bite out of personal privacy. Researchers have developed a wearable small enough to stick to a human tooth virtually unnoticed, capable of wirelessly transmitting data on any chemicals it comes into contact with.

The team, researchers from Tufts University School of Engineering, set out to create a better solution for monitoring dietary intake. Their work could prove invaluable to medical researchers and has the potential to save innumerable lives.

The device could give doctors real-time alerts on patients based on actual chemical intake. This means that rather than wait for an emergency, when it’s often too late, they could respond before there’s a problem.

Imagine what a difference this could make for people who need to monitor glucose or sodium levels – this wearable could be revolutionary in the field of preventative medicine. And that’s just the tip of the iceberg.

Unfortunately, in 2018, there’s also an ugly side to anything that collects personal data, as evidenced by the still-unfolding Cambridge Analytica and Facebook scandal.

The sensor works by changing its “color.” If the central layer takes on salt or ethanol, for example, its electrical properties shift, causing the sensor to absorb and transmit a different spectrum of radiofrequency waves at varying intensity. That is how nutrients and other analytes can be detected and measured.

Fiorenzo Omenetto, the study’s corresponding author, said in a Tufts University post:

“In theory we can modify the bioresponsive layer in these sensors to target other chemicals – we are really limited only by our creativity. We have extended common RFID technology to a sensor package that can dynamically read and transmit information on its environment, whether it is affixed to a tooth, to skin, or any other surface.”

You’d probably notice if someone put a shiny square on your front tooth while you were sleeping (unless you never smile), but you may not notice one behind your ear or affixed to your scalp right away. And if that bothers you, perhaps you should avoid considering the “any other surface” bit, because without some sort of James Bond spy equipment or advanced training you’d have almost no chance of noticing a dozen of these stuck behind your walls, inside your toilet, or under the bumper of your car.

Make no mistake, this is important research that will almost certainly save lives – but once this wearable is out in the wild, it’s likely to become another tool for gathering our personal data. It’s not up to the researchers to ensure bad actors don’t misappropriate their work; it’s up to our regulators and lawmakers to ensure that those who do are held accountable.

For now, it’s worth applauding the amazing work this team has done. While it’s important to point out the potential dangers of any new technology, we shouldn’t throw the baby out with the bathwater.


Elon Musk has removed Tesla’s and SpaceX’s Facebook pages, taking a stand on #DeleteFacebook

SpaceX CEO Elon Musk tweeted on Friday that he had never seen the SpaceX Facebook page and planned to delete it.

“It is time. #deletefacebook,” Brian Acton, the co-founder of the messaging service WhatsApp, tweeted on Tuesday, the day the Federal Trade Commission opened an investigation into how Cambridge Analytica accessed the Facebook data. For whatever reason, Musk decided to respond to Acton’s tweet on Friday. “What’s Facebook?” he replied. He appeared to be joking, but someone decided to call his bluff.

“Delete SpaceX page on Facebook if you’re the man?” @serdarsprofile said.

“I didn’t realize there was one. Will do,” Musk replied. At this point, it wasn’t clear whether Musk was trolling or being serious, so others joined in.

After someone showed Musk a screengrab of the SpaceX Facebook page, he noted it was the first time he had seen it and that it would “be gone soon.” Then someone prompted him to delete Tesla’s Facebook page, with Elon responding that it “looks lame anyway.” And just for good measure, it seems that the Facebook page for Tesla-owned SolarCity has disappeared as well.

The two pages had a combined following of more than 5.2 million users. They served as a form of free advertising (notable given Musk’s statement that Tesla “doesn’t advertise”) and viral marketing, driven by live streams of SpaceX launches and a strong, cult-like following of loyal customers and window shoppers.

“I don’t use FB & never have, so don’t think I’m some kind of martyr or my companies are taking a huge blow also we don’t advertise or pay for endorsements, so … don’t care.”

– Musk

Zuckerberg expressed some frustration after a SpaceX rocket exploded on a Florida launchpad in 2016, destroying a satellite that Facebook was planning to use. “As I’m here in Africa, I’m deeply disappointed to hear that SpaceX’s launch failure destroyed our satellite that would have provided connectivity to so many entrepreneurs and everyone else across the continent,” Zuckerberg wrote on Facebook hours after the incident.

The following year, Musk said in a tweet that Zuckerberg’s understanding of the threat posed by artificial intelligence “is limited.” That is probably when things started to heat up between the two.

Elon claims he doesn’t use Facebook and never has. The effects of removing the pages will be felt most by the people employed to run them, so Musk doesn’t deserve much praise for publicly boycotting the troubled social network. For Musk, the harder thing would be to swear off Instagram, which Facebook owns and which he loves – which is why he said:

 “Instagram’s probably ok imo, so long as it stays fairly independent”


Facebook suspends data firm with Trump ties

Facebook has suspended Cambridge Analytica over allegations that it kept improperly obtained user data after telling the social media giant it had been deleted.

Facebook Inc said on Friday it was suspending political data analytics firm Cambridge Analytica, which worked for President Donald Trump’s 2016 election campaign, after finding its data privacy policies had been violated. The company said in a statement that it suspended Cambridge Analytica and its parent group Strategic Communication Laboratories (SCL) after receiving reports that they did not delete information about Facebook users that had been inappropriately shared.

Cambridge Analytica was not immediately available for comment. Facebook did not mention the Trump campaign or any political campaigns in its statement, attributed to company Deputy General Counsel Paul Grewal.

“After the discovery of this violation in 2015, Facebook demanded certifications from Kogan and all parties he had given data to that the information had been destroyed.”

– Grewal

Cambridge Analytica’s goal, starting in 2013, was to use data modeling to influence voters based on their emotional makeup. Data scientist and former Cambridge Analytica employee Christopher Wylie, speaking to The Guardian, described this as an effort to “target their inner demons.”

Trump’s campaign hired Cambridge Analytica in June 2016 and paid it more than $6.2 million, according to Federal Election Commission records. Cambridge Analytica says it uses “behavioral microtargeting” – combining analysis of people’s personalities with demographics – to predict and influence mass behavior. It says it has data on 220 million Americans, two-thirds of the U.S. population.

Trump campaign officials downplayed Cambridge Analytica’s role, saying they briefly used the company for television advertising and paid some of its most skilled data employees. The campaign denied using Cambridge Analytica’s data, saying it instead relied on information from the Republican National Committee (RNC).

“Using the RNC data was one of the best choices the campaign made. Any claims that voter data were used from another source to support the victory in 2016 are false.”

– Trump campaign


Android P Developer Preview arrives with new notification panel, notch support and much more

Android P brings exquisite features, ranging from notch support to indoor navigation.

We’re just three months into 2018 and Google is once again releasing an early preview of the next major version of Android. The Android P Developer Preview is out right now for developers and eager Android enthusiasts to take for a test drive. It’s still very early days for Android P, and while we haven’t installed it yet, we do have a lengthy Google blog post to draw details from. Here’s a look at the top five features from the first developer preview.

1. The new status bar with notch support

Credit Apple for introducing the iconic ‘notch’. Manufacturers are already experimenting with the design on their upcoming smartphones, and Android P helps them leverage it: the next version of Android comes with native support for the notch – or ‘display cutout’, as Google prefers to call it. A cutout simulator lets developers emulate a full-screen experience around the notch to check how their apps deal with the different types of cutouts.

 

2. The all-new notification panel

The new look for notifications also extends to messaging apps: notifications will be able to include recent lines from your conversation, so you can reply inline right inside the notification. It’s similar to how iOS handles iMessage notifications, but without all the force-touch fuss. Apps will also be able to include “smart replies”, images, and stickers directly in the notification.

 

3. Indoor navigation with Wi-Fi RTT

Accurate indoor positioning has been a long-standing challenge that opens new opportunities for location-based services. Android P adds platform support for the IEEE 802.11mc WiFi protocol — also known as WiFi Round-Trip-Time (RTT) — to let you take advantage of indoor positioning in your apps.
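The physics behind RTT ranging is straightforward: a Wi-Fi packet’s round-trip time, multiplied by the speed of light, gives twice the distance to the access point. A back-of-the-envelope sketch of that arithmetic (illustrative only – the real measurement is performed by the Android framework and 802.11mc-capable hardware):

```python
# Rough sketch of the arithmetic behind Wi-Fi RTT ranging (not the
# Android API): distance is the speed of light times half the measured
# round-trip time, after subtracting the access point's fixed
# turnaround delay reported by the protocol.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def rtt_distance_m(round_trip_time_ns: float, ap_turnaround_ns: float = 0.0) -> float:
    """Convert a round-trip time in nanoseconds to a one-way distance in meters."""
    one_way_s = (round_trip_time_ns - ap_turnaround_ns) / 2 * 1e-9
    return SPEED_OF_LIGHT_M_PER_S * one_way_s

# A 100 ns round trip with no turnaround delay corresponds to ~15 m.
print(round(rtt_distance_m(100), 1))
```

The tiny timescales involved are why RTT ranging needs hardware support: a one-meter error corresponds to only a few nanoseconds of timing error.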

 

4. More media support

Android P adds support for the HDR VP9 Profile 2 codec, making it easier to deliver HDR video from sources like YouTube, Play Movies, and others. There’s also support for the High Efficiency Image Format (HEIF), a modern JPEG alternative already supported in iOS and macOS. JPEG is more than 20 years old, so it should be no surprise that a brand-new image format can do a better job.

 

5. Multi-camera API

With phones shipping with more and more cameras, Android’s existing camera API is getting a bit of an upgrade. If you’re running Android P on a phone with dual front or back cameras, apps will get a camera stream that “automatically switches between two or more cameras.” Google says this will allow developers to “create innovative features not possible with just a single camera.” For Google in particular, this will probably help the company’s augmented reality framework, ARCore, which today cannot use dual rear cameras to see in 3D.

The above are just the highlights. There has also been work done on the Kotlin programming language, a new version of the Neural Networks API introduced in Android 8.1, and a new fingerprint API.

The Android Developer Preview releases are only ever the AOSP side of Android. There is a whole world of proprietary Google code that accompanies any major Android release, which we aren’t seeing right now—what we have only gives us half the picture. This is also just the first developer preview release of Android P. There will be many more preview releases coming down the road, with a final release usually coming around September. Google promises “more features and capabilities” in future preview releases, and the company says it will have “even more to share at Google I/O.” Let’s hope for the best!


Matrix Voice – Create your own AI-powered voice assistant

The Matrix Voice development board is a Raspberry Pi add-on you can use to build your own voice assistant.

The Matrix Voice is capable of much more than just voice functions because of its myriad additional sensors: it can detect altitude, temperature, humidity, and motion. The Voice board packs all these features into a 3.14-inch disc that mounts directly onto a Raspberry Pi. With it you get eight dedicated microphones and an FPGA to handle the algorithms and audio processing.

MATRIX Voice is an open-source voice recognition platform: a dev board 3.14 inches in diameter, with an array of 8 MEMS microphones connected to a Xilinx Spartan-6 FPGA and 64 Mbit of SDRAM, plus 18 RGBW LEDs and 64 GPIO pins. It gives developers the opportunity to integrate custom voice and hardware-accelerated machine learning technology right onto the silicon. A version with an ESP32 – a Wi-Fi- and Bluetooth-enabled 32-bit microcontroller – is also available. It’s aimed at makers and at industrial and home IoT engineers.

The MATRIX Voice disc

At a glance, the FPGA-driven development board for the Raspberry Pi is a developer’s dream. To simplify hardware application development, MATRIX Voice includes MATRIX OS, which lets developers build hardware applications in just a few lines of JavaScript.

The MEMS microphone array on MATRIX Voice lets you add voice recognition to your creations using the latest online cognitive services, including Microsoft Cognitive Services, Amazon Alexa Voice Service, the Google Speech API, Wit.ai and Houndify. You can trigger events based on sound detection, such as receiving a text message when your dog barks back home.

You can also build your own Amazon Echo-style assistant using a Raspberry Pi and MATRIX Voice. In the demo video, the makers use Alexa Voice Service (AVS) – the service behind the Amazon Echo – to handle many of the challenging tasks in the project.


SpaceX rocket overshot Mars’ orbit and swept towards the Asteroid Belt

Elon Musk’s super-rocket from SpaceX has taken flight and overshot Mars’ orbit, going further out into the solar system than originally planned.

Just hours after Tuesday’s spectacular launch from Florida of Falcon Heavy, the world’s most powerful space rocket, the billionaire founder of the private spaceflight company SpaceX admitted Starman had been a little heavy on the gas and would travel well beyond the intended target of Mars.

“Third burn successful. Exceeded Mars orbit and kept going to the Asteroid Belt,” Musk said in a tweet that seemed to confirm the final destination of the mission had changed.

The rocket was supposed to make one final engine burn to send the car into its final orbit, but it appears the burn was a little too strong. Its force has shocked and impressed planetary scientists.

Falcon Heavy – Trajectory

After launch, the Tesla cruised through space for a good six hours. This “coast” phase was meant to show off a special orbital maneuver for the US Air Force. Then the rocket completed one final engine burn in space and put the car on its final orbit. It looks like that burn might have happened somewhere over Southern California, as some people in the area started reporting sightings of the rocket igniting in the night sky after 9:30PM ET on Tuesday.
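To see why a slightly over-energetic burn stretches an orbit so dramatically, the vis-viva equation v² = GM(2/r − 1/a) relates a spacecraft’s heliocentric speed at distance r from the Sun to the size of its orbit. A quick sketch with illustrative numbers (not actual mission telemetry):

```python
import math

# How excess speed stretches a heliocentric orbit: solve the vis-viva
# equation v^2 = GM * (2/r - 1/a) for the semi-major axis a, then take
# aphelion = 2a - r. The speeds below are illustrative, not telemetry.

GM_SUN = 1.327e20        # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11            # one astronomical unit, in meters
R_EARTH_ORBIT = 1.0 * AU # departing from roughly Earth's distance

def aphelion_au(speed_m_s, r_m=R_EARTH_ORBIT):
    a = 1 / (2 / r_m - speed_m_s**2 / GM_SUN)  # semi-major axis
    return (2 * a - r_m) / AU                  # farthest point from the Sun

# Earth orbits at ~29.8 km/s; a little extra speed goes a long way.
for v_km_s in (32.7, 33.5):
    print(f"{v_km_s} km/s -> aphelion ~ {aphelion_au(v_km_s * 1000):.2f} AU")
```

Around 32.7 km/s the aphelion lands near Mars’ distance (~1.5 AU); less than a kilometer per second more pushes it well beyond, which is why an overly strong burn can send the payload toward the asteroid belt.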

“We’re looking at the issue. The centre core obviously didn’t land on the drone ship,” – Musk

South African-born Musk, the billionaire former CEO of the online money transfer company PayPal, said he had invested more than half a billion dollars to get Falcon Heavy off the ground and hoped the success of its first flight would lead to more competition with private spaceflight rivals, including Blue Origin, owned by the Amazon tycoon Jeff Bezos.

Today’s private rockets aren’t nearly as capable as the rockets of the 1960s, although they are far more cost-effective. The upcoming BFR is intended to be as capable as the Saturn V, which carried out the original lunar landings.


 

Chatbots to replace humans at customer support very soon

AI and Chatbots Are Transforming The Customer Experience.

Artificial Intelligence is dramatically changing business as chatbots fueled by AI are becoming a viable customer service channel. The best ones deliver a customer experience in which customers cannot tell if they are communicating with a human or a computer. AI has come a long way in recognizing the content – and context – of customers’ requests and questions.

Studies and reports show that customers want quick, frictionless solutions to their problems and answers to their questions. No doubt there are acceptance issues for AI and chatbots. Some customers have always used traditional phone support and have a hard time accepting anything else. But, there is a growing contingent of customers who are increasingly open to new technology, especially if it can enhance their customer experience. As the technology improves and acceptance grows, chatbots, powered by AI, will have a strong role in customer service and support.

Salesforce first launched an SMS chatbot product in 2014 and has since expanded it to include Facebook Messenger. The company also offers a product called Live Agent Chat, which facilitates human-to-human interactions. Meredith Flynn-Ripley, vice president of mobile messaging at Salesforce, predicts that by 2020 the average person will have more conversations a day with bots than with their spouse.

“We really see bots as changing the job description and turning agents into intelligent problem solvers”

– Meredith Flynn-Ripley

Leveraging its vast amount of user data, Facebook opened up its messenger platform to developers and businesses in April 2016. Like Salesforce, Facebook already has a lot of data about its users. Artificial intelligence and chatbots are only as smart as the data they have access to. Chatbots built on top of Facebook’s system will likely have more advanced conversation abilities than chatbots built from scratch.

Shop Spring’s Assistant

Many consumers would rather handle customer service issues by chat than over the phone – 56% according to a Nielsen study commissioned by Facebook. But this doesn’t mean the future of chatbots is limited to words: human-voice-generating machines will soon gain popularity in a number of industries as an important customer service tool.

Using artificial intelligence to communicate with customers can lead to better engagement and understanding. Yet there are some technological barriers that need to be overcome before the technology is as seamless as its engineers dream it could be. Once these hurdles are overcome, chatbots could have a much bigger role in our day-to-day lives than they do now.

Let us know about your views in the comments section below!


Google AI Generates Human Voice – Artificial intelligence to replace human interaction?

Google has been excelling in the AI sphere for a while, and its Assistant is proof of how far it has come.

Not only does it perform most actions through voice recognition, but it also provides spoken feedback in a voice that comes ever so close to sounding as natural as a human’s. From stiff and unnatural to smooth and lifelike voice generation, Google has come a long way.

A recently published Google research paper (spotted via Quartz) suggests we might be closer to this reality than you might think. The paper describes a text-to-speech system Google calls Tacotron 2, and in it the researchers claim the AI can imitate human voice with excellent accuracy.

The system is the second official generation of the technology, consisting of two deep neural networks. The first network translates the text into a spectrogram, a visual way to represent audio frequencies over time. That spectrogram is then fed into WaveNet, a system from Google’s DeepMind AI research lab, which reads the chart and generates the corresponding audio elements.
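A spectrogram is simply the magnitude of short-windowed Fourier transforms taken as the window slides through the audio. A minimal numpy illustration of the idea (Tacotron 2 actually predicts mel-scaled spectrograms; this sketch shows only the general concept):

```python
import numpy as np

# Minimal illustration of a spectrogram: magnitudes of windowed FFTs
# over time, giving a time-frequency grid of the signal's content.

def spectrogram(signal, frame_len=256, hop=128):
    frames = []
    window = np.hanning(frame_len)  # taper each frame to reduce leakage
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        # keep only the non-negative frequency bins
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (time_frames, freq_bins)

# Example: a 440 Hz tone sampled at 16 kHz for 0.1 s
sr = 16000
t = np.arange(int(0.1 * sr)) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)
print(spec.shape)  # (11, 129)
```

Tacotron 2’s first network predicts a grid like this from text; WaveNet then inverts it back into an audio waveform.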

In the last section, Google provides side-by-side examples of a human voice alongside the AI-generated one – with outstanding results.

Here’s the sentence used for both the AI-generated voice and the human recording:

“George Washington was the first President of the United States.”

 

The Google researchers also demonstrate that Tacotron 2 can handle hard-to-pronounce words and names, as well as vary its delivery. For instance, capitalized words are stressed, as a speaker would do to indicate that a specific word is an important part of a sentence.

Yet there is still a vast gap between an AI that can read aloud like a human and one that can converse like a human. Moreover, the system is trained to mimic only a single female voice; to speak like a male or a different female, Google would need to train the system again.

Let us know about your views in the comments section below!
