Deep Learning Research Directions: Computational Efficiency
by Tim Dettmers

This blog post looks at the growth of computation, data, and deep learning researcher demographics to show that the field of deep learning could stagnate as this growth slows. We will look at recent deep learning research papers that raise similar problems, but also demonstrate how one could solve them.
After discussing these papers, I conclude with promising research directions that face these challenges head-on.
This blog post series discusses long-term research directions and takes a critical look at short-term thinking and its pitfalls. In this first post of the series, I will discuss long-term trends in data and computational power, drawing on trends in computing and hardware.
Then we look at the demographics of researchers, and I show that the fraction of researchers who do not have access to powerful computational resources is increasing rapidly.
We will also see that, compared to specialized techniques, pre-training on more data is merely on par in predictive performance. From this, I conclude that more data is only helpful for large companies that have the computational resources to process it, and that most researchers should aim for research where the limiting resource is creativity, not computational power.
However, I also show that the future holds ever-growing amounts of data, which will make large datasets a requirement. Thus, we need techniques that make it feasible to process more data, but we also need techniques that make deep learning inclusive for as many researchers as possible, many of whom will come from developing countries.
After the discussion of the core paper, we have a look at possible solutions introduced in four recent papers. These papers aim to overcome these long-term trends by (1) making operations, like convolution, more efficient; (2) developing smart features so that we can use small, fast models that yield the same results as big, fat, stupid models; (3) showing how companies with substantial computational resources can use those resources to create research that benefits everyone by searching for new architectures; and (4) solving the problem of ever-growing data by pre-selecting the relevant data via information retrieval.
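To make point (1) concrete, here is a minimal sketch, not taken from the papers discussed here, of one standard way to make convolutions more efficient: replacing a full convolution with a depthwise separable one (a per-channel spatial filter followed by a 1x1 pointwise mix of channels). Counting parameters shows where the savings come from; the channel sizes and kernel size below are illustrative assumptions.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution (one spatial filter per input
    channel) followed by a 1x1 pointwise convolution that mixes
    channels. Same receptive field, far fewer parameters."""
    depthwise = c_in * k * k      # spatial filtering, per channel
    pointwise = c_in * c_out      # channel mixing via 1x1 conv
    return depthwise + pointwise

c_in, c_out, k = 128, 128, 3
standard = conv_params(c_in, c_out, k)
separable = separable_conv_params(c_in, c_out, k)
print(f"standard: {standard}, separable: {separable}, "
      f"savings: {standard / separable:.1f}x")
```

For these sizes the standard convolution needs 147,456 parameters versus 17,536 for the separable version, roughly an 8x reduction, which is why this factorization is a common building block of small, fast models.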
I will conclude by discussing what place these papers have among the long-term research directions in deep learning.

The Problem of Short-term Thinking in Deep Learning Research

This blog post series aims to foster critical thinking in deep learning research and to encourage the deep learning community to pursue research that is critical for the progress of the field.
Currently, an unhealthy hype and herd mentality have gained strong traction in the field of deep learning and, in my opinion, a lot of research is becoming more and more short-sighted. This short-sightedness has mostly to do with competitive pressure from the increasing number of new students entering the field, pressure from our publish-or-perish culture, and pressure from the publish-on-arXiv-before-you-get-scooped mindset, which favors incomplete research that provides quick gains rather than advancing the deep learning community.
Another problem is that many researchers use Twitter as their primary source for current deep learning research trends, which exacerbates these herd-mentality problems: it encourages more of the same, that is, doing and thinking about what is popular, and it encourages following big players and big names rather than a mix of researchers, which leads to single-mindedness.
Twitter is not a discussion forum where one can discuss ideas in depth and come to a conclusion that benefits everyone. Twitter is a platform where the big win big and the small disappear.
If the big make a mistake, everybody in the deep learning community is misled. The thing is, the big make mistakes too. It is like the explore vs. exploit problem: if everybody just exploits, there will be no discoveries, just incremental advancements, more of the same.
And I would like to believe that the world needs breakthroughs. AI can help us prosper and solve difficult problems, but only if we choose to explore more.
This blog post is no antidote to all of this, but it aims to nudge you toward analyzing research directions with a more critical eye. I hope you leave this blog post thinking about your own direction and how it relates to the long-term picture that I draw here.
The research trends discussed in this blog post series aim to (1) highlight important but ignored research on the sidelines of deep learning, or (2) raise problems that make very popular deep learning research evidently short-sighted or naive.
I do not try to glorify a rogue mindset here: Being defiant for the sake of being defiant has no merit.
I also do not want to say that all major research directions are garbage: most popular research is popular because it is important. What I want is to help you feed a critical mindset and long-term thinking. The theme for this blog post is a topic from category (1): it deals with deep learning research which is important but all too often goes unnoticed: computational efficiency and the problems that come with data.
Though this topic is usually ignored, I will analyze trends to outline why it is an important long-term problem that everybody should be concerned about. Indeed, the field of deep learning may stagnate if we do not tackle this problem.
After discussing these trends, we will see current research which exposes the core problems of this research direction. Finally, I will discuss four research papers from the past two months which try to address the raised issues.