If you’ve ever used Facebook’s Messenger chat app to share texts and links with your friends, Facebook has pored over them.
Facebook is automatically scanning Facebook Messenger conversations for unacceptable content, the company has confirmed, monitoring both text and images shared. The revelation stemmed from an unexpected side detail from an interview with CEO Mark Zuckerberg.
On Wednesday, Facebook founder and CEO Mark Zuckerberg agreed to testify before the House Energy and Commerce Committee on April 11th regarding his company’s data privacy practices. On the same day, Facebook said it will propose new updates to its data policy and its terms of services to reflect more transparency about how the company collects and shares information on Facebook, Messenger, Instagram and WhatsApp.
“It’s important to show people in black and white how our products work – it’s one of the ways people can make informed decisions about their privacy,” wrote Facebook’s Chief Privacy Officer Erin Egan and Deputy General Counsel Ashlie Beringer in a blog post. They added:
“These updates are about making things clearer. We’re not asking for new rights to collect, use or share your data on Facebook. We’re also not changing any of the privacy choices you’ve made in the past.”
Zuckerberg gave an example of how it works in a recent interview, where the system had spotted messages related to the ethnic cleansing in Myanmar. At the time, the chief exec said, the system was able to step in and block the transmission of the messages through Facebook’s network.
“So that’s the kind of thing where I think it is clear that people were trying to use our tools in order to incite real harm,” Zuckerberg said.
The company is working to make its privacy policies clearer, but still ends up with gaps between what it says users have agreed to, and what users think they actually agreed to. The Messenger scanning systems “are very similar to those that other internet companies use today,” the company said.
A pair of researchers from Columbia University recently built a self-replicating AI system.
Instead of painstakingly creating the layers of a neural network and guiding its development as it becomes more advanced, they’ve automated the process. The researchers, Oscar Chang and Hod Lipson, published their fascinating paper, titled “Neural Network Quine,” earlier this month, and with it a novel method for “growing” a neural network.
Here’s how they motivate the work in the paper, which appeared on arXiv this month:
“The primary motivation here is that AI agents are powered by deep learning, and a self-replication mechanism allows for Darwinian natural selection to occur, so a population of AI agents can improve themselves simply through natural selection – just like in nature – if there was a self-replication mechanism for neural networks.”
The researchers compare their work to quines, a type of computer program that produces a copy of its own source code. In a neural network, however, it isn’t the source code that gets cloned but the weights – the values that determine the strength of the connections between neurons.
The researchers set up a “vanilla quine” network, a feed-forward system that produces its own weights as outputs. The vanilla quine can also be extended to replicate its weights while solving a task. They chose image classification on the MNIST dataset, where a computer has to identify the correct digit from a set of handwritten numbers from zero to nine.
The network was trained on 60,000 MNIST images and tested on another 10,000. After 30 runs, the quine network reached an accuracy of 90.41 per cent. It’s not a bad start, but its performance doesn’t really compare to larger, more sophisticated image recognition models out there.
The paper states that “self-replication occupies a significant portion of the neural network’s capacity.” In other words, the neural network cannot focus fully on the image recognition task if it also has to self-replicate.
“This is an interesting finding: it is more difficult for a network that has increased its specialization at a particular task to self-replicate. This suggests that the two objectives are at odds with each other,”
the paper said.
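The setup is easiest to see as a fixed-point problem. Below is a toy, linear sketch of the idea – everything here (the matrix, the sizes, the iteration count) is invented for illustration; the actual quine is a multi-layer network with coordinate inputs, trained by gradient descent:

```python
import numpy as np

# Toy, linear stand-in for the quine idea. The "network" is a matrix E
# mapping a weight vector to a new weight vector; a perfect quine is a
# fixed point E @ w == w.
rng = np.random.default_rng(0)
n = 16
E = 0.5 * rng.normal(0, 1 / np.sqrt(n), (n, n))  # fixed network "body"
w = rng.normal(0, 1.0, n)                        # initial weights

def replication_error(w):
    return float(np.mean((E @ w - w) ** 2))

errors = [replication_error(w)]
for _ in range(200):
    w = E @ w            # "regenerate": adopt the output as the new weights
    errors.append(replication_error(w))

# Naive regeneration collapses to the degenerate all-zero fixed point,
# which is one reason the quine has to be trained rather than iterated.
print(f"replication error: {errors[0]:.3f} -> {errors[-1]:.3e}")
```

The error going to zero here is the trivial solution (all weights zero); the paper’s contribution is getting a non-trivial network that both replicates itself and classifies digits.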
In the future AI will create itself, advance itself, and integrate new neural networks through a natural selection process. What’s the worst that could happen?
Wireless real-time monitoring through this wearable could add precision to the linkage between diet and health.
A recent breakthrough in miniaturized sensor technology could end up taking a bite out of personal privacy. Researchers have developed a wearable small enough to stick on a human tooth virtually unnoticed, capable of wirelessly transmitting data about any chemicals it comes into contact with.
The team, researchers from Tufts University School of Engineering, set out to create a better solution for monitoring dietary intake. Their work could prove invaluable to medical researchers and has the potential to save innumerable lives.
The device could give doctors real-time alerts on patients based on actual chemical intake. This means that rather than wait for an emergency, when it’s often too late, they could respond before there’s a problem.
Imagine what a difference this could make for people who need to monitor glucose or sodium levels – this wearable could be revolutionary in the field of preventative medicine. And that’s just the tip of the iceberg.
The sensor can change its “color.” If the central layer takes on salt or ethanol, for example, its electrical properties shift, causing the sensor to absorb and transmit a different spectrum of radiofrequency waves with varying intensity. That is how nutrients and other analytes can be detected and measured.
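As a rough illustration of the principle – a toy model with invented component values, not the Tufts team’s actual design – treat the sensor as an LC resonator whose capacitance scales with the dielectric constant of whatever the middle layer absorbs; the resonant frequency then shifts with the analyte:

```python
import math

# Toy model (illustrative component values, not the Tufts design): an
# LC resonator whose capacitance tracks the relative permittivity of the
# absorbed analyte, shifting its resonant frequency.
def resonant_freq_hz(inductance_h: float, capacitance_f: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

L0 = 10e-9   # 10 nH, hypothetical coil
C0 = 1e-12   # 1 pF baseline capacitance
for name, eps_r in [("air", 1.0), ("ethanol", 24.5), ("salt water", 80.0)]:
    f = resonant_freq_hz(L0, C0 * eps_r)
    print(f"{name}: {f / 1e9:.2f} GHz")
```

A reader off the tooth only needs to see which frequency band the patch responds in to infer what it has absorbed.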
“In theory we can modify the bioresponsive layer in these sensors to target other chemicals – we are really limited only by our creativity. We have extended common RFID technology to a sensor package that can dynamically read and transmit information on its environment, whether it is affixed to a tooth, to skin, or any other surface,” the researchers said.
You’d probably notice if someone put a shiny square on your front tooth while you were sleeping (unless you never smile), but you may not notice one behind your ear or affixed to your scalp right away. And if that bothers you, perhaps you should avoid considering the “any other surface” bit, because without some sort of James Bond spy equipment or advanced training you’d have almost no chance of noticing a dozen of these stuck behind your walls, inside your toilet, or under the bumper of your car.
Make no mistake, this is important research that will almost certainly save lives – but once this wearable is out in the wild, it’s pretty likely to become another tool for gathering our personal data. It’s not up to the researchers to ensure bad actors don’t misappropriate their work; it’s up to our regulators and lawmakers to ensure that those who do are held accountable.
For now, it’s worth applauding the amazing work this team has done. While it’s important to point out the potential dangers of any new technology, we shouldn’t throw the baby out with the bathwater.
SpaceX CEO Elon Musk tweeted on Friday that he had never seen the SpaceX Facebook page and planned to delete it.
“It is time. #deletefacebook,” Brian Acton, the co-founder of the messaging service WhatsApp, tweeted on Tuesday, the day the Federal Trade Commission opened an investigation into how Cambridge Analytica accessed the Facebook data. For whatever reason, Musk decided to respond to Acton’s tweet on Friday. “What’s Facebook?” he replied. He appeared to be joking, but someone decided to call his bluff.
“Delete SpaceX page on Facebook if you’re the man?” @serdarsprofile said.
“I didn’t realize there was one. Will do,” Musk replied. At this point, it wasn’t clear whether Musk was trolling or being serious, so others joined in.
After someone showed Musk a screengrab of the SpaceX Facebook page, he noted it was the first time he had seen it and that it would “be gone soon.” Then someone prompted him to delete Tesla’s Facebook page, with Elon responding that it “looks lame anyway.” And just for good measure, it seems that the Facebook page for Tesla-owned SolarCity has disappeared as well.
The two pages gave Musk’s companies a combined following of over 5.2 million users – a form of free advertising (given Musk’s statement that Tesla “doesn’t advertise”), with viral marketing through live streams of SpaceX launches and a strong, cult-like following of loyal customers and window shoppers. Musk, for his part, tweeted:
“I don’t use FB & never have, so don’t think I’m some kind of martyr or my companies are taking a huge blow also we don’t advertise or pay for endorsements, so … don’t care.”
Zuckerberg expressed some frustration after a SpaceX rocket exploded on a Florida launchpad in 2016, destroying a satellite that Facebook was planning to use. “As I’m here in Africa, I’m deeply disappointed to hear that SpaceX’s launch failure destroyed our satellite that would have provided connectivity to so many entrepreneurs and everyone else across the continent,” Zuckerberg wrote on Facebook hours after the incident.
The following year, Musk said in a tweet that Zuckerberg’s understanding of the threat posed by artificial intelligence “is limited.” That is probably when things started to heat up between the two.
Elon claims he doesn’t use Facebook and never has. The effects of the removal of the Facebook pages will be felt most by the people employed to run them, so Musk doesn’t deserve any praise for publicly boycotting the troubled social network. For Musk, the harder thing to do would be to swear off Instagram, which Facebook owns and which he loves. Which is why he said,
“Instagram’s probably ok imo, so long as it stays fairly independent”
Facebook has suspended Cambridge Analytica over allegations that it kept improperly obtained user data after telling the social media giant it had been deleted.
Facebook said on Friday that it was suspending the political data analytics firm Cambridge Analytica, which worked for President Donald Trump’s 2016 election campaign, after finding its data privacy policies had been violated. The company said in a statement that it suspended Cambridge Analytica and its parent group Strategic Communication Laboratories (SCL) after receiving reports that they did not delete information about Facebook users that had been inappropriately shared.
Cambridge Analytica was not immediately available for comment. Facebook did not mention the Trump campaign or any political campaigns in its statement, attributed to company Deputy General Counsel Paul Grewal.
“After the discovery of this violation in 2015, Facebook demanded certifications from Kogan and all parties he had given data to that the information had been destroyed.” – Grewal
Cambridge Analytica’s goal, starting in 2013, was to use data modeling to influence voters based on their emotional makeup. Data scientist and former Cambridge Analytica employee Christopher Wylie, speaking to The Guardian, described this as an effort to “target their inner demons.”
Trump’s campaign hired Cambridge Analytica in June 2016 and paid it more than $6.2 million, according to Federal Election Commission records. Cambridge Analytica says it uses “behavioral microtargeting”, or combining analysis of people’s personalities with demographics, to predict and influence mass behavior. It says it has data on 220 million Americans, two thirds of the U.S. population.
Trump campaign officials downplayed Cambridge Analytica’s role, saying they briefly used the company for television advertising and paid some of its most skilled data employees. The campaign denied using Cambridge Analytica’s data, saying it instead relied on information from the Republican National Committee (RNC).
“Using the RNC data was one of the best choices the campaign made. Any claims that voter data were used from another source to support the victory in 2016 are false.” -Trump Campaign
Android P comes with features ranging from notch support to indoor navigation.
It’s just three months into 2018 and Google is once again releasing an early preview of the next major version of Android. The Android P Developer Preview is out right now for developers and eager Android enthusiasts to take for a test drive. It’s still very early days for Android P, and while we haven’t installed it yet, we do have a lengthy Google blog post to draw details from. Here’s a look at the top five features from the first developer preview of Android P.
1. The new status bar with notch support
Credit Apple for introducing us to the iconic ‘notch’. Manufacturers are already experimenting with the design on their upcoming smartphones, and Android P is here to help them leverage it. The next version of Android comes with native support for the notch – or ‘display cutout’, as Google prefers to call it. A notch simulator allows developers to simulate a full-screen experience and check how their apps deal with different types of cutouts.
2. The all-new notification panel
The redesign also brings a new look for notifications from messaging apps: they will be able to include recent lines from your conversation, so you can reply inline right inside the notification. It’s similar to how iOS handles iMessage notifications, but without all that force-touch fuss. Apps will also be able to include “smart replies”, images, and stickers directly in the notification.
3. Indoor navigation with Wi-Fi RTT
Accurate indoor positioning has been a long-standing challenge, and solving it opens new opportunities for location-based services. Android P adds platform support for the IEEE 802.11mc Wi-Fi protocol – also known as Wi-Fi Round-Trip-Time (RTT) – to let apps take advantage of indoor positioning.
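The principle behind RTT ranging is simple enough to sketch. This is the underlying time-of-flight math, not the Android API itself, and the timing numbers are illustrative:

```python
# Illustrative math behind RTT ranging: distance is the speed of light
# times half the (turnaround-corrected) round-trip time of flight.
C = 299_792_458  # speed of light, m/s

def rtt_distance_m(t_round_trip_s: float, t_turnaround_s: float = 0.0) -> float:
    """Distance from a single 802.11mc-style measurement.

    t_round_trip_s: total time from transmit to receipt of the response.
    t_turnaround_s: the responder's processing delay, reported by the
                    protocol so it can be subtracted out.
    """
    return C * (t_round_trip_s - t_turnaround_s) / 2

# A 100 ns round trip with a 33.3 ns turnaround is roughly a 10 m range.
d = rtt_distance_m(100e-9, 33.3e-9)
print(f"{d:.2f} m")
```

Ranging against three or more access points at known positions then lets a device trilaterate its indoor location.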
4. More media support
Android P adds support for the HDR VP9 Profile 2 codec, making it easier to deliver HDR video from sources like YouTube, Play Movies, and others. There’s also support for the High Efficiency Image Format (HEIF), a modern JPEG alternative already supported in iOS and macOS. JPEG is more than 20 years old, so it should be no surprise that a newer image format can do a better job.
5. Multi-camera API
With phones shipping with more and more cameras, Android’s existing camera API is getting a bit of an upgrade. If you’re running Android P on a phone with dual front or back cameras, apps will get a camera stream that “automatically switches between two or more cameras.” Google says this will allow developers to “create innovative features not possible with just a single camera.” For Google in particular, this will probably help the company’s augmented reality framework, ARCore, which today cannot use dual rear cameras to see in 3D.
The above are just the highlights. There has also been work done on the Kotlin programming language, a new version of the Neural Networks API introduced in Android 8.1, and another new fingerprint API.
The Android Developer Preview releases are only ever the AOSP side of Android. There is a whole world of proprietary Google code that accompanies any major Android release, which we aren’t seeing right now—what we have only gives us half the picture. This is also just the first developer preview release of Android P. There will be many more preview releases coming down the road, with a final release usually coming around September. Google promises “more features and capabilities” in future preview releases, and the company says it will have “even more to share at Google I/O.” Let’s hope for the best!
It’s 2018 and the world is more connected than ever. The internet has enabled countless innovations and technologies that have changed the way we function. Currently, more than 3 billion people worldwide use the web for news, entertainment, communication and a myriad of other activities. Yet everything we see on the web is only the tip of the iceberg: by one estimate, search engines can access only 0.03% of the information available online. So what about the rest of it? That is where the Dark Web comes into play.
What is The Dark web?
The sites that traditional searches yield are part of what’s known as the Surface Web, which is made up of indexed pages that a search engine’s web crawlers are programmed to retrieve. The Dark web consists of data that you won’t locate with a simple Google search.
So what is the dark web, really? The Dark Web is a part of the world wide web that requires special software to access. Dark web sites are effectively “hidden”: they have not been indexed by a search engine and can only be accessed if you know the address of the site. Special markets called “darknet markets” also operate within the dark web, mainly selling illegal products like drugs and firearms, paid for in cryptocurrency.
The dark web requires a specific software program (the Tor browser) to do the trick, and it offers you a special layer of anonymity that the surface web and the deep web cannot.
Deep web vs. Dark web
There is a widespread misconception that the Deep web is the same as the Dark web. Contrary to popular belief, they are two distinct things; much of the public, unacquainted with the dark web, tends to use the two terms interchangeably. Here’s an infographic by Dark Web News that explains the distinction clearly.
The contrast between the deep web and the dark web is often visually described by comparing it to an iceberg. Visualize an iceberg that is partly submerged.
Everything that is accessible to the average internet user is considered the Surface web – the part of the iceberg above the water. The surface web includes Facebook, Twitter, Wikipedia and more.
Just below the surface of the water is the Deep Web. It’s made up of pages under the same general host names as sites on the surface web, but ones that search engines don’t index – content behind logins, paywalls, and private databases. The deep web makes up the majority of the internet as a whole.
What people don’t realize is that there’s a lot the invisible internet has to offer besides illegal activity. The deep web is used to obtain banned books, organize meetings in secret and store archived personal data. Harvard’s internal communications system is an example.
The last part of the iceberg that is submerged deep underwater represents the Dark web. It is a subset of the deep web that’s only accessible through software that guards anonymity. The Dark web contains URLs that end in .onion rather than .com, .gov or .edu.
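The difference is visible in the address itself. As a concrete illustration, here’s a rough format check based on the published onion-address formats – 16 base32 characters for version 2 hidden services, 56 for version 3:

```python
import re

# Shape check for Tor hidden-service hostnames (format only – it says
# nothing about whether a service actually exists or is reachable).
# v2 addresses are 16 base32 characters, v3 are 56, ending in .onion.
ONION_RE = re.compile(r"^(?:[a-z2-7]{16}|[a-z2-7]{56})\.onion$")

def looks_like_onion(host: str) -> bool:
    return ONION_RE.match(host.lower()) is not None

print(looks_like_onion("expyuzz4wqqyqhjn.onion"))  # v2-style address: True
print(looks_like_onion("example.com"))             # surface-web domain: False
```

Because these names are derived from cryptographic keys rather than registered with a DNS authority, they only resolve inside the Tor network.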
Using Tor itself isn’t illegal, nor is visiting many deep web sites. The only illegal activity is what would be illegal out in the real world.
Illegal guns, pornography, terrorism and drug markets rely on the dark web. These are run by people who want to stay anonymous.
Some of the illegal markets on the dark web deal in:
Guns and ammunition
Hackers and malicious services
Assassins and hitmen
Accessing the Dark web:
The simplest way to start using Tor is to download the Tor Browser bundle from the official Tor Project website (assuming you’re on Windows). Installation instructions for other operating systems are on the same page.
Once it’s installed and launched, the browser should connect automatically to the Tor network. Be warned though, some sites may contain links to illegal services and content. You are responsible for your actions as this information is purely educational.
Besides the shady people, here are a few more users of the deep web:
Journalists and Whistleblowers
Free speech and anti-censorship advocates
Citizens in oppressed regimes who need access to news and information
At the end of the day, it comes down to who you are as a person. Power in the wrong hands can cause trouble, but the same power in the hands of righteous people can bring change and improve the lives of others. That’s it from us at Learn2Create – and always remember: stay safe, stay curious!
The Matrix Voice development board is a Raspberry Pi add-on you can use to build your own voice assistant.
The Matrix Voice is capable of doing much more than just voice functions because of its myriad additional sensors. It can detect altitude, temperature, humidity, and motion. The Voice board packs all these features into a 3.5-inch disc that mounts directly to a Raspberry Pi. With it you get eight dedicated microphones and an FPGA to handle all the algorithms and audio processing.
MATRIX Voice is an open-source voice recognition platform consisting of a dev board 3.14 inches in diameter, with a radial array of MEMS microphones connected to a Xilinx Spartan-6 FPGA and 64 Mbit of SDRAM, plus 18 RGBW LEDs and 64 GPIO pins. It gives developers the opportunity to integrate custom voice and hardware-accelerated machine learning technology right on the silicon. A version with an ESP32 Wi-Fi/Bluetooth-enabled 32-bit microcontroller is also available. It’s aimed at makers as well as industrial and home IoT engineers.
The MEMS microphone array on MATRIX Voice lets you leverage voice recognition in your app creations using the latest online cognitive services, including Microsoft Cognitive Services, Amazon Alexa Voice Service, Google Speech API, Wit.ai and Houndify. You can trigger events based on sound detection, such as receiving a text message when your dog is barking back home.
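A sound-triggered event like the dog-barking example boils down to detecting when audio energy crosses a threshold. Here’s a minimal sketch using synthetic audio and pure NumPy – an illustration of the idea, not the MATRIX SDK:

```python
import numpy as np

# Toy threshold-based sound-event detection: flag frames whose RMS
# energy crosses a threshold.
def detect_events(samples: np.ndarray, frame: int = 256, threshold: float = 0.2):
    n = len(samples) // frame
    frames = samples[: n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.flatnonzero(rms > threshold)  # indices of "loud" frames

rng = np.random.default_rng(1)
audio = 0.01 * rng.normal(size=4096)                               # background noise
audio[1024:1280] += 0.8 * np.sin(np.linspace(0, 60 * np.pi, 256))  # a "bark"
print(detect_events(audio))  # only the frame containing the bark fires
```

A real deployment would run a classifier on the flagged frames (to tell a bark from a slammed door) before firing off the text message.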
You can also build your own Amazon Alexa using a Raspberry Pi and MATRIX Voice. In the demo video, the team used Alexa Voice Service (AVS) – the same service that powers the Amazon Echo – to handle many of the challenging tasks in the project.
Elon Musk’s super-rocket from SpaceX has taken flight and overshot Mars’ orbit, going further out into the solar system than originally planned.
Just hours after Tuesday’s spectacular launch from Florida of Falcon Heavy, the world’s most powerful space rocket, the billionaire founder of the private spaceflight company SpaceX admitted Starman had been a little heavy on the gas and would travel well beyond the intended target of Mars.
“Third burn successful. Exceeded Mars orbit and kept going to the Asteroid Belt,” Musk said in a tweet that seemed to confirm the final destination of the mission had changed.
The rocket was supposed to make one final engine burn to push the car into its final orbit, but the burn appears to have been a little too energetic. The result has surprised and impressed planetary scientists.
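The effect of an over-energetic burn can be sketched with the vis-viva equation. The delta-v figures below are illustrative, not SpaceX telemetry – the point is that a few extra km/s of heliocentric speed at Earth’s distance raises the orbit’s far point dramatically:

```python
import math

# Back-of-the-envelope vis-viva sketch (illustrative speeds, not SpaceX
# telemetry): how extra heliocentric speed at Earth's distance raises
# the resulting orbit's aphelion.
MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

def aphelion_au(v_ms: float, r_m: float = AU) -> float:
    # vis-viva: v^2 = mu * (2/r - 1/a), solved for the semi-major axis a
    a = 1.0 / (2.0 / r_m - v_ms ** 2 / MU_SUN)
    # assumes a tangential, prograde burn, so r is the orbit's perihelion
    return (2.0 * a - r_m) / AU

v_circ = math.sqrt(MU_SUN / AU)  # circular orbital speed at 1 AU, ~29.8 km/s
print(f"circular: {aphelion_au(v_circ):.2f} AU")
print(f"+3 km/s:  {aphelion_au(v_circ + 3000):.2f} AU")  # past Mars (~1.52 AU)
print(f"+4 km/s:  {aphelion_au(v_circ + 4000):.2f} AU")  # well beyond Mars
```

Burning a little long at the end of the escape trajectory therefore translates directly into a higher aphelion than planned.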
Falcon Heavy – Trajectory
After launch, the Tesla cruised through space for a good six hours. This “coast” phase was meant to show off a special orbital maneuver for the US Air Force. Then the rocket completed one final engine burn in space and put the car on its final orbit. It looks like that burn might have happened somewhere over Southern California, as some people in the area started reporting sightings of the rocket igniting in the night sky after 9:30 PM ET on Tuesday.
“We’re looking at the issue. The centre core obviously didn’t land on the drone ship,” – Musk
South African-born Musk, the billionaire who co-founded the online money transfer company PayPal, said he had invested more than half a billion dollars to get Falcon Heavy off the ground and hoped the success of its first flight would lead to more competition with private spaceflight rivals, including Blue Origin, owned by the Amazon tycoon Jeff Bezos.
Today’s private rockets aren’t nearly as capable as the rockets of the 1960s, though they are more cost-effective. The upcoming BFR is expected to match the Saturn V, which carried out the original lunar landings.
Microsoft introduced the Azure Bot Framework two years ago, and companies have been building chatbots for a variety of situations ever since. Today, the tech giant made the Microsoft Azure Bot Service and its Language Understanding service (known as LUIS) generally available.
“Making these two services generally available on Azure simultaneously extends the capabilities of developers to build custom models that can naturally interpret the intentions of people conversing with bots,” announced Lili Cheng, corporate vice president at the Microsoft AI and Research division.
Microsoft has created a whole set of tools for developers to create their bots, including the Bot Framework and Cognitive Services. Cheng says flexibility is at the core of the Chatbot service. You don’t even need to host it on Azure if you don’t want to. The Bot service is actually part of a broader set of Azure services Microsoft has created to help developers build applications with artificial intelligence underpinnings.
Cheng said more than 200,000 developers have signed up for the Bot service, and they currently have 33,000 active bots in areas like retail, healthcare, financial services and insurance. Companies building bots with the Microsoft tools include Molson Coors, UPS and Sabre.
“You can build a bot and auto provision on Azure and you can publish on Facebook Messenger, Slack and most of the Microsoft channels [such as] Cortana, Skype and Skype for teams,” – Lili Cheng
You also can embed the bot in a web page or in an app and customize the UI as you see fit. When you combine the bot building tools with the LUIS language understanding tool, you get what should be a powerful combination.
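The turn-handling pattern behind that combination is easy to sketch. The example below is a self-contained stand-in – the handler names and the fake LUIS response are hypothetical, and real bots call the Bot Framework SDK and the LUIS endpoint instead – but it shows the shape: the language service returns a top-scoring intent plus entities, and the bot dispatches to a matching handler.

```python
from typing import Callable, Dict

def fake_luis(utterance: str) -> dict:
    """Stand-in for a LUIS call: returns a top intent and entities."""
    text = utterance.lower()
    if "book" in text and "flight" in text:
        return {"topIntent": "BookFlight", "entities": {"destination": "Paris"}}
    if "weather" in text:
        return {"topIntent": "GetWeather", "entities": {}}
    return {"topIntent": "None", "entities": {}}

# Hypothetical handlers: one per intent, plus a fallback for "None".
handlers: Dict[str, Callable[[dict], str]] = {
    "BookFlight": lambda r: f"Booking a flight to {r['entities'].get('destination', 'somewhere')}.",
    "GetWeather": lambda r: "Here's the forecast.",
    "None":       lambda r: "Sorry, I didn't understand that.",
}

def on_turn(utterance: str) -> str:
    result = fake_luis(utterance)
    return handlers[result["topIntent"]](result)

print(on_turn("Book me a flight to Paris"))
print(on_turn("What's the weather like?"))
```

Because the dispatch logic is decoupled from the channel, the same handler table can serve Messenger, Slack, Cortana, or an embedded web chat.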
Interested in Azure? Follow BotSpawn to stay updated on the Chatbot revolution!