"Learn with Google AI" comes with existing content as well as the new Machine Learning Crash Course (MLCC).
"We believe it's important that the development of AI reflects as diverse a range of human perspectives and needs as possible. So, Google AI is making it easier for everyone to learn ML by providing a huge range of free, in-depth educational content," Zuri Kemp, Programme Manager for Google's machine learning education, said in a statement.
"This is for everyone -- from deep ML experts looking for advanced developer tutorials and materials, to curious people who are ready to try to learn what ML is in the first place," Kemp added.
The course features videos from ML experts at Google, interactive visualisations illustrating ML concepts, coding exercises using cutting-edge TensorFlow APIs, and a focus on how practitioners implement ML in the real world.
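The coding exercises in the crash course are built on TensorFlow. As a rough illustration of the kind of beginner exercise such a course involves (this snippet is not taken from MLCC itself), a toy linear-regression model in tf.keras might look like this:

```python
# Illustrative only: a toy linear-regression exercise in the style of
# introductory TensorFlow tutorials (not taken from MLCC itself).
import numpy as np
import tensorflow as tf

# Synthetic data: y = 3x + 2 plus a little noise.
x = np.random.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")
y = 3.0 * x + 2.0 + np.random.normal(scale=0.1, size=(256, 1)).astype("float32")

# A single dense unit learns the slope and intercept.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=20, verbose=0)

weight, bias = model.layers[0].get_weights()
print(f"learned weight ~ {weight[0][0]:.2f}, bias ~ {bias[0]:.2f}")
```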
Originally developed by Google's engineering education team, MLCC has been taken by more than 18,000 Googlers so far.
The new AI method utilises past experience to become better and faster at solving new instances of the chip-design problem.
"Our method was used to design the next generation of Google's artificial intelligence (AI) accelerators, and has the potential to save thousands of hours of human effort for each new generation," the team wrote in a paper appeared in the scientific journal Nature.
"Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields", they noted.
In about six hours, the model could generate a design that optimises the placement of different components on the chip.
To achieve this, the Google team used a dataset of 10,000 chip layouts to build a machine-learning model, which was then trained with reinforcement learning.
"Our RL (reinforcement learning) agent generates chip layouts in just a few hours, whereas human experts can take months," Anna Goldie, research scientist at Google Brain, who took part in the research, said in a tweet.
"These superhuman AI-generated layouts were used in Google's latest AI accelerator (TPU-v5)!" She added.
Google has used the model to design its next generation of tensor processing units (TPUs), which run in the company's data centres to enhance the performance of various AI applications.
Chip floor planning is the engineering task of designing the physical layout of a computer chip.
Despite five decades of research, chip floor planning has defied automation, requiring months of intense effort by physical design engineers to produce manufacturable layouts.
"In under six hours, our method automatically generates chip floor plans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area," according to the Google AI team.
In the study, to be published in the journal Ophthalmology, the researchers created a system that not only improved the ophthalmologists' diagnostic accuracy but also improved the algorithm's accuracy.
The study expands on previous work from Google AI showing that its algorithm works roughly as well as human experts in screening patients for a common diabetic eye disease called diabetic retinopathy.
"What we found is that AI can do more than simply automate eye screening, it can assist physicians in more accurately diagnosing diabetic retinopathy. AI and physicians working together can be more accurate than either one alone," said lead researcher Rory Sayres.
Recent advances in AI promise to improve access to diabetic retinopathy screening and to improve its accuracy. But it's less clear how AI will work in the physician's office or other clinical settings, the team said.
According to the team, previous attempts to use computer-assisted diagnosis show that some screeners rely on the machine too much, repeating its errors, while others under-rely on it and ignore accurate predictions.
The research team at Google AI believes that some of these pitfalls may be avoided if the computer can "explain" its predictions.
To test this theory, ten ophthalmologists (four general ophthalmologists, one trained outside the US, four retina specialists, and one retina specialist in training) were asked to read images with and without algorithm assistance.
Without assistance, general ophthalmologists are significantly less accurate than the algorithm, while retina specialists are not significantly more accurate than the algorithm.
With assistance, general ophthalmologists match but do not exceed the model's accuracy, while retina specialists start to exceed the model's performance.
Reading mammograms is a difficult task, even for experts, and can often result in both false positives and false negatives.
In turn, these inaccuracies can lead to delays in detection and treatment, unnecessary stress for patients and a higher workload for radiologists who are already in short supply, Google said in a blog post on Wednesday.
Google's AI model spotted breast cancer in de-identified screening mammograms (where identifiable information has been removed) with greater accuracy, fewer false positives and fewer false negatives than experts.
"This sets the stage for future applications where the model could potentially support radiologists performing breast cancer screenings," said Shravya Shetty, Technical Lead, Google Health.
Digital mammography, or X-ray imaging of the breast, is the most common method to screen for breast cancer, with over 42 million exams performed each year in the US and the UK combined.
"But despite the wide usage of digital mammography, spotting and diagnosing breast cancer early remains a challenge," said Daniel Tse, Product Manager, Google Health.
Together with colleagues at DeepMind, Cancer Research UK Imperial Centre, Northwestern University and Royal Surrey County Hospital, Google set out to see if AI could support radiologists to spot the signs of breast cancer more accurately.
The findings, published in the journal Nature, showed that AI could improve the detection of breast cancer.
Google's AI model was trained and tuned on a representative data set comprising de-identified mammograms from more than 76,000 women in the UK and more than 15,000 women in the US, to see if it could learn to spot signs of breast cancer in the scans.
The model was then evaluated on a separate de-identified data set of more than 25,000 women in the UK and over 3,000 women in the US.
"In this evaluation, our system produced a 5.7 per cent reduction of false positives in the US, and a 1.2 per cent reduction in the UK. It produced a 9.4 per cent reduction in false negatives in the US, and a 2.7 per cent reduction in the UK," informed Google.
The researchers then trained the AI model only on the data from the women in the UK and then evaluated it on the data set from women in the US.
In this separate experiment, there was a 3.5 per cent reduction in false positives and an 8.1 per cent reduction in false negatives, "showing the model's potential to generalize to new clinical settings while still performing at a higher level than experts".
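As a rough illustration of what such comparisons involve, the snippet below computes false-positive and false-negative rates for a set of hypothetical human reads and hypothetical model outputs against made-up ground-truth labels, and reports the difference in percentage points; none of the numbers are from the study.

```python
# Illustrative only: how false-positive / false-negative rates might be
# compared between human readers and a model.  All numbers below are made up.
def fp_fn_rates(predictions, labels):
    fp = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(predictions, labels))
    negatives = sum(y == 0 for y in labels)
    positives = sum(y == 1 for y in labels)
    return fp / negatives, fn / positives

labels       = [1, 0, 0, 1, 0, 1, 0, 0]   # hypothetical ground truth
reader_preds = [1, 1, 0, 0, 0, 1, 0, 1]   # hypothetical human reads
model_preds  = [1, 0, 0, 1, 0, 1, 0, 1]   # hypothetical model output

reader_fp, reader_fn = fp_fn_rates(reader_preds, labels)
model_fp, model_fn = fp_fn_rates(model_preds, labels)
print(f"false-positive reduction: {100 * (reader_fp - model_fp):.1f} points")
print(f"false-negative reduction: {100 * (reader_fn - model_fn):.1f} points")
```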
Notably, when making its decisions, the model received less information than human experts did.
The human experts (in line with routine practice) had access to patient histories and prior mammograms, while the model only processed the most recent anonymized mammogram with no extra information.
Despite working from these X-ray images alone, the model surpassed individual experts in accurately identifying breast cancer.
This work, said Google, is the latest strand of its research looking into detection and diagnosis of breast cancer, not just within the scope of radiology, but also pathology.
"We're looking forward to working with our partners in the coming years to translate our machine learning research into tools that benefit clinicians and patients," said the tech giant.
(IANS)
The results, published in the journal Nature Medicine, suggest that the AI model, built in partnership with Alphabet subsidiary DeepMind and Moorfields Eye Hospital in Britain, could help doctors study preventive treatments for age-related macular degeneration (AMD), the third-largest cause of blindness across the globe.
Around 75 per cent of patients with AMD have an early form called "dry" AMD that usually has relatively mild impact on vision.
A minority of patients, however, develop the more sight-threatening form of AMD called exudative, or "wet" AMD.
This condition affects around 15 per cent of patients, and occurs when abnormal blood vessels develop underneath the retina.
These vessels can leak fluid, which can cause permanent loss of central vision if not treated early enough.
Wet AMD often affects one eye first, so patients become heavily reliant upon their unaffected eye to maintain their normal day-to-day living.
Unfortunately, 20 per cent of these patients will go on to develop wet AMD in their other eye within two years.
The condition often develops suddenly but further vision loss can be slowed with treatments if wet AMD is recognised early enough.
The new research showed that the Google Health AI model has the potential to predict whether a patient will develop wet AMD within six months.
The researchers trained their model using a retrospective, anonymised dataset of 2,795 patients.
These patients had been diagnosed with wet AMD in one of their eyes, and were attending one of seven clinical sites for regular 3D optical coherence tomography (OCT) imaging and treatment.
For each patient, the researchers worked with retinal experts to review all prior scans for each eye and determine the scan in which wet AMD was first evident.
The AI system is composed of two deep convolutional neural networks, one taking the raw 3D scan as input and the other taking a segmentation map outlining the types of tissue present in the retina.
It used the raw scan and tissue segmentations to estimate a patient's risk of progressing to wet AMD within the next six months.
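The published architecture goes beyond what is described here, but a minimal sketch of a two-branch network in that spirit, written with tf.keras, could look like the following; the input shapes, layer sizes and training settings are assumptions for illustration only, not the published model.

```python
# Illustrative only: a minimal two-branch network in the spirit of the
# description above -- one branch for the raw 3D OCT scan, one for the
# tissue-segmentation map -- combined to predict six-month risk.  Shapes
# and layer sizes are assumptions, not the published architecture.
import tensorflow as tf

raw_scan = tf.keras.Input(shape=(64, 64, 64, 1), name="raw_oct")
seg_map  = tf.keras.Input(shape=(64, 64, 64, 1), name="tissue_segmentation")

def branch(x):
    # Small 3D convolutional feature extractor shared in structure by both inputs.
    x = tf.keras.layers.Conv3D(8, 3, activation="relu", padding="same")(x)
    x = tf.keras.layers.MaxPool3D(2)(x)
    x = tf.keras.layers.Conv3D(16, 3, activation="relu", padding="same")(x)
    return tf.keras.layers.GlobalAveragePooling3D()(x)

merged = tf.keras.layers.Concatenate()([branch(raw_scan), branch(seg_map)])
risk = tf.keras.layers.Dense(1, activation="sigmoid", name="six_month_risk")(merged)

model = tf.keras.Model(inputs=[raw_scan, seg_map], outputs=risk)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```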
In the future, this system could potentially help doctors plan studies of earlier intervention, as well as contribute more broadly to clinical understanding of the disease and disease progression.
(IANS)
More advanced AI-based systems, such as those with BERT-based language capabilities, can understand more complex, natural-language queries.
However, when it comes to high-quality, trustworthy information, even with its advanced language-understanding capabilities, Google does not understand content the way humans do.
Instead, search engines largely understand the quality of content through what are commonly called "signals."
"For example, the number of quality pages that link to a particular page is a signal that a page may be a trusted source of information on a topic," said Danny Sullivan, Public Liaison for Google Search.
Google has more than 10,000 search quality raters, people who collectively perform millions of sample searches and rate the quality of the results.
The company has also made it easy to spot fact checks in Search, News and in Google Images by displaying fact check labels.
"These labels come from publishers that use ClaimReview schema to mark up fact checks they have published," Sullivan said in a blog post.
Sullivan, however, admitted that Google's systems aren't always perfect.
"So if our systems fail to prevent policy-violating content from appearing, our enforcement team will take action in accordance with our policies," he said.
Google is working closely with Wikipedia to detect and remove vandalism that might otherwise surface in its knowledge panels.
The search engine giant is now able to detect breaking news queries in a few minutes, versus over 40 minutes earlier.
(IANS)