India-born scientist's Robo Brain is a very fast online learner

August 25, 2014

Mumbai, Aug 25: In July, scientists from Cornell University led by Ashutosh Saxena said they had developed Robo Brain, a large computational system that learns from publicly available Internet resources. The system, according to a 25 August statement by Cornell, is downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals.

Information from the system, which Saxena had described at the 2014 Robotics: Science and Systems Conference in Berkeley, is being translated and stored in a robot-friendly format that robots will be able to draw on when needed.

The India-born scientist, a graduate of the Indian Institute of Technology-Kanpur, has now launched a website for the project at robobrain.me, which will display things the brain has learnt; visitors will be able to make additions and corrections. Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing.

“Our laptops and cellphones have access to all the information we want. If a robot encounters a situation it hasn't seen before, it can query Robo Brain in the cloud,” Saxena, assistant professor, Microsoft Faculty Fellow and Sloan Fellow at Cornell University, said in a statement.

Saxena and his colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, say Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behaviour.

His team includes Ashesh Jain, a third-year PhD computer science student at Cornell. Robo Brain employs what computer scientists call structured deep learning, where information is stored in many levels of abstraction.

Deep learning is a set of algorithms, or step-by-step instructions for calculations, used in machine learning. For instance, an easy chair is a member of the class of chairs and, going up another level, chairs are furniture.

Robo Brain knows that chairs are something you can sit on, but that a human can also sit on a stool, a bench or the lawn, the statement said.
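
To make the idea of layered abstraction concrete, here is a minimal sketch in Python of a toy concept hierarchy with affordances. All of the concept names, relations and the inheritance rule are invented for illustration; Robo Brain's actual representation is far larger and learned from data rather than hand-written.

```python
# A toy concept hierarchy with affordances, sketching knowledge stored at
# several levels of abstraction. All names are invented for illustration.

IS_A = {                       # child concept -> parent concept
    "easy chair": "chair",
    "chair": "furniture",
    "stool": "furniture",
    "bench": "furniture",
}

AFFORDANCES = {                # concept -> actions it supports
    "chair": {"sit on"},
    "stool": {"sit on"},
    "bench": {"sit on"},
    "lawn": {"sit on", "walk on"},
    "furniture": {"move", "clean"},
}

def ancestors(concept):
    """Walk up the is-a chain: easy chair -> chair -> furniture."""
    chain = [concept]
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def can(concept, action):
    """True if the concept, or anything it inherits from, affords the action."""
    return any(action in AFFORDANCES.get(c, set()) for c in ancestors(concept))

def things_you_can(action):
    """Everything in the toy knowledge base that affords a given action."""
    every = set(IS_A) | set(IS_A.values()) | set(AFFORDANCES)
    return sorted(c for c in every if can(c, action))

print(ancestors("easy chair"))        # ['easy chair', 'chair', 'furniture']
print(can("easy chair", "sit on"))    # True, inherited from 'chair'
print(things_you_can("sit on"))       # benches, chairs, the lawn, stools...
```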

A robot"s computer brain stores what it has learnt in a form that mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines—called nodes and edges.

The nodes could represent objects, actions or parts of an image, and each one is assigned a probability—how much you can vary it and still be correct.

In searching for knowledge, a robot's brain makes its own chain and looks for one in the knowledge base that matches within those limits.
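
A toy version of that matching idea can be sketched as a small graph of nodes and edges with per-node probabilities, against which a query chain is checked. The nodes, edges, probabilities and scoring rule below are invented for illustration and are not Robo Brain's actual Markov model.

```python
# Toy knowledge graph: nodes carry a confidence value, edges connect objects
# and actions. A query "chain" of node labels is matched against what the
# graph supports. Everything here is invented for illustration.

NODES = {                     # node -> confidence the system has in it
    "mug": 0.95, "kettle": 0.85, "cup": 0.90, "hold": 0.90, "pour": 0.80,
}

EDGES = {                     # node -> directly connected nodes
    "mug":    {"hold", "pour"},
    "kettle": {"hold", "pour"},
    "cup":    {"hold"},
    "hold":   {"mug", "kettle", "cup"},
    "pour":   {"mug", "kettle"},
}

def chain_score(chain, min_node_prob=0.5):
    """Score a chain of node labels: every consecutive pair must be joined by
    an edge, and every node must be known with at least min_node_prob."""
    if any(NODES.get(n, 0.0) < min_node_prob for n in chain):
        return 0.0
    score = 1.0
    for a, b in zip(chain, chain[1:]):
        if b not in EDGES.get(a, set()):
            return 0.0                 # broken chain: no edge between a and b
        score *= NODES[a] * NODES[b]
    return score

# A robot asking "can I pour from a kettle?" builds a chain and checks it.
print(chain_score(["kettle", "pour"]))   # > 0: supported by the graph
print(chain_score(["cup", "pour"]))      # 0.0: no such edge stored
```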

“The Robo Brain will look like a gigantic, branching graph with abilities for multi-dimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large database for the brain. Jami is also co-founder and chief technology officer at Predict Effect, Zoodig Inc.

The basic skills of perception, planning and language understanding are critical for robots to perform tasks in human environments. Robots need to perceive with sensors and plan accordingly.

If a person wants to talk to a robot, for instance, the robot has to listen, get the context and knowledge of the environment, and plan its motion to execute the task accordingly.

For example, an industrial robot needs to detect objects to be manipulated, plan its motions and communicate with the human operator. A self-driving robot needs to detect objects on the road, plan where to drive and also communicate with the passenger.
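
That division of labour, perceive with sensors, plan a course of action, then communicate it, can be sketched as a simple loop. The Python skeleton below is purely illustrative; the function names and the stubbed logic are invented and do not come from any of the robots described here.

```python
# Skeleton of the perceive -> plan -> communicate loop described above.
# The functions are hypothetical placeholders, invented for illustration.

def perceive(sensor_frame):
    """Turn raw sensor data into a list of detected objects (stubbed)."""
    return list(sensor_frame.get("objects", []))

def plan(objects, goal):
    """Produce a trivial action sequence toward a goal (stubbed)."""
    if goal in objects:
        return ["move_to:" + goal, "grasp:" + goal]
    return ["search_for:" + goal]

def communicate(plan_steps):
    """Tell the human operator or passenger what the robot is about to do."""
    print("Planned actions:", ", ".join(plan_steps))

# One pass of the loop for an industrial robot asked to pick up a bolt.
frame = {"objects": ["bolt", "wrench", "conveyor"]}
communicate(plan(perceive(frame), goal="bolt"))
# Planned actions: move_to:bolt, grasp:bolt
```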

Scientists at the lab at Cornell do not manually programme the robots. Instead, they take a machine-learning approach, using a variety of data and learning methods to train the robots.

“Our robots learn from watching (3D) images on the Internet, from observing people via cameras, from observing users playing video games, and from humans giving feedback to the robot,” the Cornell website reads.

There have been similar attempts to make computers understand context and learn from the Internet.

For instance, since January 2010, scientists at Carnegie Mellon University (CMU) have been working to build a never-ending machine learning system that acquires the ability to extract structured information from unstructured Web pages.

If successful, the scientists say it will result in a knowledge base (or relational database) of structured information that mirrors the content of the Web. They call this system the never-ending language learner, or NELL.

NELL first attempts to read, or extract facts from, text found in hundreds of millions of web pages, for example a "plays instrument" relation between a musician and an instrument. Second, it attempts to improve its reading competence, so that it can extract more facts from the Web, more accurately, the following day. So far, NELL has accumulated over 50 million candidate beliefs by reading the Web, and it holds these at different levels of confidence, according to information on the CMU website.

“NELL has high confidence in 2,348,535 of these beliefs—these are displayed on this website. It is not perfect, but NELL is learning,” the website reads.
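
The distinction between candidate beliefs and the high-confidence subset NELL displays can be illustrated with a small filtering step. The beliefs and the confidence threshold in this Python sketch are invented for the example and are not drawn from NELL's actual knowledge base.

```python
# Toy version of filtering candidate beliefs by confidence, in the spirit of
# NELL's candidate vs high-confidence beliefs. The beliefs and the 0.9
# threshold are invented for illustration.

candidate_beliefs = [
    # (subject, relation, object, confidence)
    ("george_harrison", "plays_instrument", "guitar", 0.97),
    ("paris",           "city_in_country",  "france", 0.99),
    ("pluto",           "is_a",             "planet", 0.45),
    ("oak",             "is_a",             "tree",   0.93),
]

def promote(beliefs, threshold=0.9):
    """Keep only the beliefs the system is confident enough to display."""
    return [b for b in beliefs if b[3] >= threshold]

for subj, rel, obj, conf in promote(candidate_beliefs):
    print(f"{rel}({subj}, {obj})  confidence={conf:.2f}")
```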

There is also IBM's (International Business Machines) Watson, which beat Jeopardy players in 2011 and has now joined hands with the United Services Automobile Association (USAA) to help members of the military prepare for civilian life.

In January 2014, IBM said it would spend $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud”.

More than 2,500 developers and start-ups have reached out to the IBM Watson Group since the Watson Developers Cloud was launched in November 2013, according to a 22 August blog post in the Harvard Business Review.

Agencies
March 15, 2020

Cybercriminals continue to exploit public fear of rising coronavirus cases through malware and phishing emails in the guise of content coming from the Centers for Disease Control and Prevention (CDC) in the US and World Health Organisation (WHO), says cybersecurity firm Kaspersky.

In the APAC region, Kaspersky has detected 93 coronavirus-related malicious files in Bangladesh, 53 in the Philippines, 40 in China, 23 in Vietnam, 22 in India and 20 in Malaysia.

Single-digit detections were monitored in Singapore, Japan, Indonesia, Hong Kong, Myanmar, and Thailand. 

Alongside the steady rise in coronavirus cases come ever more persistent techniques that cybercriminals are using to prey on public panic amid the global epidemic, the company said in a statement.

Kaspersky also detected emails offering products such as masks; the topic later became more commonly used in Nigerian spam emails. Researchers also found scam emails with phishing links and malicious attachments.

One of the latest spam campaigns mimics the World Health Organisation (WHO), showing how cybercriminals recognise and are capitalising on the important role WHO has in providing trustworthy information about the coronavirus.

"We would encourage companies to be particularly vigilant at this time, and ensure employees who are working at home exercise caution. 

"Businesses should communicate clearly with workers to ensure they are aware of the risks, and do everything they can to secure remote access for those self-isolating or working from home," commented David Emm, principal security researcher.

Some malicious files are spread via email. 

For example, an Excel file distributed via email under the guise of a list of coronavirus victims allegedly sent from the World Health Organisation (WHO) was, in fact, a Trojan-Downloader, which secretly downloads and installs another malicious file. 

This second file was a Trojan-Spy designed to gather various data, including passwords, from the infected device and send it to the attacker.
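
Purely as a defensive illustration, the sketch below shows the kind of naive heuristic a mail filter might apply to such campaigns: flagging messages that invoke the WHO or the coronavirus but do not come from a trusted domain, or that carry macro-enabled or executable attachments. The domain list, extensions and rules are invented for the example and are not Kaspersky's detection logic.

```python
# Naive, purely illustrative mail filter. The trusted-domain list and the
# risky-extension list are invented; this is NOT Kaspersky's detection logic.

TRUSTED_DOMAINS = {"who.int", "cdc.gov"}
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".xlsm", ".docm"}

def looks_suspicious(sender, subject, attachments):
    """Flag mail that spoofs a health authority or carries risky attachments."""
    domain = sender.rsplit("@", 1)[-1].lower()
    claims_health_authority = any(
        word in subject.lower() for word in ("who", "cdc", "coronavirus", "covid")
    )
    spoofed = claims_health_authority and domain not in TRUSTED_DOMAINS
    risky_files = [a for a in attachments
                   if any(a.lower().endswith(ext) for ext in RISKY_EXTENSIONS)]
    return spoofed or bool(risky_files)

print(looks_suspicious("alerts@who-update.com",
                       "WHO: coronavirus victim list",
                       ["victims.xlsm"]))                      # True
print(looks_suspicious("newsletter@who.int",
                       "Situation report", ["report.pdf"]))    # False
```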

Agencies
June 24, 2020

New Delhi, Jun 24: The Centre has made it mandatory for sellers to enter the 'Country of Origin' while registering all new products on government e-marketplace (GeM).

The e-marketplace is a special purpose vehicle (SPV) under the Ministry of Commerce and Industry which facilitates the entry of small local sellers in public procurement, while implementing 'Make in India' and MSE Purchase Preference Policies of the Centre.

Accordingly, the ministry said the move has been made to promote 'Make in India' and 'Atma Nirbhar Bharat'.

The provision has been enabled via the introduction of new features on GeM.

Besides the registration process, the new feature also reminds sellers who have already uploaded their products to disclose their products' 'Country of Origin' details.

The ministry further said that failure to disclose this detail will lead to removal of the products from the e-marketplace.

"GeM has taken this significant step to promote 'Make in India' and 'Aatmanirbhar Bharat'," the ministry said in a statement.

"GeM has also enabled a provision for indication of the percentage of local content in products. With this new feature, now, the 'Country of Origin' as well as the local content percentage are visible in the marketplace for all items. More importantly, the 'Make in India' filter has now been enabled on the portal. Buyers can choose to buy only those products that meet the minimum 50 per cent local content criteria."

In the case of bids, the ministry said that buyers can now reserve any bid for "Class I Local suppliers. For those bids below Rs 200 crore, only Class I and Class II Local Suppliers are eligible to bid, with Class I supplier getting purchase preference".
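
The 'Make in India' filter and the bid-eligibility rule quoted above can be expressed as a small sketch. The product records, field names and supplier classes below are invented for illustration, and the rules are simplified from the ministry's description.

```python
# Hypothetical sketch of the rules described above: a "Make in India" product
# filter (>= 50% local content) and a simplified eligibility check for bids
# below Rs 200 crore. Records and field names are invented.

products = [
    {"name": "office chair", "country_of_origin": "India", "local_content_pct": 70},
    {"name": "laptop",       "country_of_origin": "China", "local_content_pct": 10},
    {"name": "steel desk",   "country_of_origin": "India", "local_content_pct": 55},
]

def make_in_india_filter(items, min_local_content=50):
    """Keep items meeting the minimum local-content criterion."""
    return [p for p in items if p["local_content_pct"] >= min_local_content]

def eligible_suppliers(bid_value_crore, suppliers):
    """For bids below Rs 200 crore, only Class I and Class II local suppliers
    may bid (Class I gets purchase preference)."""
    if bid_value_crore < 200:
        return [s for s in suppliers if s["class"] in ("I", "II")]
    return suppliers

print([p["name"] for p in make_in_india_filter(products)])
# ['office chair', 'steel desk']

suppliers = [{"name": "A", "class": "I"}, {"name": "B", "class": "II"},
             {"name": "C", "class": "non-local"}]
print([s["name"] for s in eligible_suppliers(150, suppliers)])   # ['A', 'B']
```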

In addition to this, the Department for Promotion of Industry and Internal Trade (DPIIT) has reportedly called for a meeting with all e-commerce companies such as Amazon and Flipkart to display the country of origin on the products sold on their platform, as well as the extent of value added in India.

Agencies
January 7, 2020

Washington, Jan 7: Facebook will ban deepfake videos ahead of the US elections but the new policy will still allow heavily edited clips so long as they are parody or satire, the social media giant said Tuesday.

Deepfake videos are hyper-realistic doctored clips made using artificial intelligence or programs that have been designed to accurately fake real human movements.

In a blog post published following a Washington Post report, Facebook said it would begin removing clips that were edited, beyond adjustments for clarity and quality, in ways that "aren't apparent to an average person" and could mislead people.

Clips would be removed if they were "the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic," the statement from Facebook vice-president Monika Bickert said.

However, the statement added: "This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words."
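
Taken together, the announcement amounts to two removal conditions plus an exception, which the hypothetical sketch below expresses as a simple rule check. The field names are invented and this is only an illustration of the stated policy, not Facebook's enforcement code.

```python
# Rule check expressing the announced policy: remove a video if it was edited
# in a way not apparent to an average person AND was produced by AI/ML that
# merges, replaces or superimposes content, UNLESS it is parody or satire or
# only omits/reorders words. Field names are invented for illustration.

def should_remove(video):
    exempt = (video.get("is_parody_or_satire")
              or video.get("only_omits_or_reorders_words"))
    if exempt:
        return False
    misleading_edit = (video.get("edited_beyond_clarity")
                       and not video.get("edit_apparent_to_average_person"))
    ai_manipulated = video.get("ai_merged_or_superimposed_content")
    return bool(misleading_edit and ai_manipulated)

deepfake = {"edited_beyond_clarity": True,
            "edit_apparent_to_average_person": False,
            "ai_merged_or_superimposed_content": True,
            "is_parody_or_satire": False}
slowed_clip = {"edited_beyond_clarity": True,
               "edit_apparent_to_average_person": False,
               "ai_merged_or_superimposed_content": False}  # e.g. the Pelosi clip

print(should_remove(deepfake))      # True
print(should_remove(slowed_clip))   # False: not AI-generated, so not covered
```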

US media noted the new guidelines would not cover videos such as the 2019 viral clip of House Speaker Nancy Pelosi, which was not a deepfake but appeared to show her slurring her words.

Facebook also gave no indication of the number of people assigned to identify and take down the offending videos, but said videos failing to meet its usual guidelines would be removed, and flagged clips would be reviewed by teams of third-party fact-checkers, among them AFP.

The news agency has been paid by the social media giant to fact-check posts across 30 countries and 10 languages, as part of a programme that started in December 2016 and now includes more than 60 organisations.

Content labeled "false" is not always removed from newsfeeds but is downgraded so fewer people see it -- alongside a warning explaining why the post is misleading.
