So complex are patterns and variations in the vein structures of leaves that botanists struggle to take advantage of them when trying to classify a specimen within the plant kingdom. A new study shows that computer vision technology can provide automated assistance by “learning” how to use venation to assign leaves to their proper family and order.

PROVIDENCE, R.I. [Brown University] — About 80 percent of all the world’s green plants – some 300,000 species – are those that flower, making up a vast division of the plant kingdom known as angiosperms. Given an isolated leaf, especially if preserved as a fossil, botanists can have a difficult time figuring out where it fits into the division. A new study in the Proceedings of the National Academy of Sciences suggests that computers could be a huge help.

In the paper co-authored by Brown University computer vision expert Thomas Serre, researchers “trained” a machine-learning algorithm on a set of nearly 7,600 digital images of leaves that had been chemically treated to emphasize their shape and venation. The software discerned relevant patterns so well from that set of examples that it went on to identify the family of previously unseen leaf images with greater than 70 percent accuracy (a rate 13 times better than chance) and the order with about 60 percent accuracy.
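The basic workflow described above, learning from labeled examples and then classifying images the system has never seen, can be sketched in miniature. The snippet below is a deliberately simple stand-in, not the study's actual method: it uses a nearest-centroid classifier on made-up two-number "venation feature" vectors, and the family names are used only as illustrative labels.

```python
import random

def train_centroids(samples):
    """Average the feature vectors of each family's training examples."""
    sums, counts = {}, {}
    for features, family in samples:
        acc = sums.setdefault(family, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[family] = counts.get(family, 0) + 1
    return {fam: [v / counts[fam] for v in vec] for fam, vec in sums.items()}

def classify(centroids, features):
    """Assign a leaf to the family whose centroid is closest."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda fam: sq_dist(centroids[fam]))

# Toy training set: two families with different (invented) venation signatures.
random.seed(0)
train = [([random.gauss(m, 0.3) for m in mus], fam)
         for fam, mus in [("Fabaceae", (1.0, 0.0)), ("Rosaceae", (0.0, 1.0))]
         for _ in range(50)]
centroids = train_centroids(train)

# A "previously unseen" leaf close to the first family's signature:
print(classify(centroids, [0.9, 0.1]))
```

The study's actual system learned far richer patterns directly from images, but the principle is the same: fit a model to labeled training examples, then judge it by its accuracy on examples held out from training.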

Study lead author Peter Wilf of Penn State University said that for the Serre group's algorithms to identify family or order is “an incredible achievement.” To make such classifications, the software had to come to “understand” that despite wide variations among a great many species, there were nevertheless unifying characteristics that meant that some leaves belonged to some distinct broader groups (families and orders) while other leaves belonged in others.

“Families and orders represent many thousands of species each, with incredible variation among the species, far beyond what botanists have been able to describe using the standard methods,” said Wilf, a paleobotanist.

Moreover, the software visually highlighted the subtle venation features that it used to make its classifications, providing botanists with new ideas of relevant traits to consider.
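One common way such highlighting can work (the article does not specify the paper's exact technique, so this is an illustrative assumption) is occlusion sensitivity: hide part of an image and measure how much the classifier's score drops. Regions whose removal hurts the score most are the regions the model relied on. The `score` function below is a made-up stand-in for a learned venation detector.

```python
def score(image):
    # Hypothetical "family score": responds to bright pixels in the
    # upper-left quadrant, standing in for a learned venation detector.
    return sum(image[r][c] for r in range(2) for c in range(2))

def occlusion_map(image):
    """Score drop when each pixel is zeroed out (higher = more important)."""
    base = score(image)
    heat = [[0.0] * len(image[0]) for _ in image]
    for r in range(len(image)):
        for c in range(len(image[0])):
            saved = image[r][c]
            image[r][c] = 0.0           # occlude one pixel
            heat[r][c] = base - score(image)
            image[r][c] = saved         # restore it
    return heat

# A 4x4 "leaf image" whose bright pixels sit in the upper-left corner.
img = [[1.0, 1.0, 0.0, 0.0],
       [1.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
heat = occlusion_map(img)
```

Here only the upper-left pixels receive positive importance, because only they affect the score. Applied to real leaf images, the same idea yields a heat map pointing botanists to the venation features the classifier found diagnostic.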

“Along with the demonstration that computers can recognize major clades of angiosperms from leaf images and the promising outlook for computer-assisted leaf classification, our results have opened a tap of novel, valuable botanical characters,” the authors wrote in PNAS.

Technology advances science

In his work at Brown, Serre, an assistant professor of cognitive, linguistic and psychological sciences, studies how the brain accomplishes visual perception with the goal of modeling it in computers. By studying vision in both biology and machines, he has produced insights into psychology and applied the resulting technology to research problems. In 2010, for example, he unveiled a system for the automated monitoring of mouse behavior that has proved useful in biology studies at Brown and beyond, saving researchers enormous amounts of labor.

The new study began when Wilf invited Serre to apply computer vision to botany after reading a 2007 publication derived from Serre’s doctoral work on computerized image classification. Wilf’s hope was that computers could help botanists sort through massive collections of leaf fossils to determine how they may be related to modern species.

To create the thousands of leaf images used in the study, Wilf’s team worked for years to digitize and vet the collection, derived from the specimen holdings of the Smithsonian Institution and elsewhere.

Serre said he was excited to contribute to a novel example in which computer vision technology can aid scientific research (computer vision has been applied to leaf classification before, but it has only attempted species classification and typically relied on leaf shape). He said he has begun to strike up collaborations with Brown plant scientists such as Andrew Leslie, assistant professor of ecology and evolutionary biology, to see how else machine vision could help the field.

“I think it can change the way we do science,” Serre said. “We can do things with computer vision that would be simply impossible if we were to rely on human annotations.”

At Brown, Serre worked with former postdoc Shengping Zhang on the study. Other authors are Sharat Chikkerur of Microsoft, Stefan Little of Penn State and Scott Wing of the Smithsonian Institution.