Reinke, C., Uchibe, E., & Doya, K. (2016). From Neuroscience to Artificial Intelligence: Maximizing Average Reward in Episodic Reinforcement Learning Tasks with an Ensemble of Q-Learners. In the Third CiNet Conference, Neural mechanisms of decision making: Achievements and new directions, Osaka, poster.
Reinke, C., Uchibe, E., & Doya, K. (2016). Learning of Stress Adaptive Habits with an Ensemble of Q-Learners. In The 2nd International Workshop on Cognitive Neuroscience Robotics, Osaka, poster.
Wang, J., Uchibe, E., & Doya, K. (2015). Two-wheeled smartphone robot learns to stand up and balance by EM-based policy hyper parameter exploration. In Proc. of the 20th International Symposium on Artificial Life and Robotics.
Uchibe, E., & Doya, K. (2015). Inverse Reinforcement Learning with Density Ratio Estimation. In The 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, University of Alberta, Canada, poster.
Reinke, C., Uchibe, E., & Doya, K. (2015). Gamma-QCL: Learning multiple goals with a gamma submodular reinforcement learning framework. In Winter Workshop on Mechanism of Brain and Mind, poster.
Eren Sezener, C., Uchibe, E., & Doya, K. (2014). Ters Pekiştirmeli Öğrenme ile Farelerin Ödül Fonksiyonunun Elde Edilmesi [Obtaining the Reward Function of Mice with Inverse Reinforcement Learning]. In Proc. of Türkiye Otonom Robotlar Konferansı (TORK). [published in Turkey, but see also an English version].
Kinjo, K., Uchibe, E., & Doya, K. (2014). Robustness of Linearly Solvable Markov Games with inaccurate dynamics model. In Proc. of the 19th International Symposium on Artificial Life and Robotics.
Wang, J., Uchibe, E., & Doya, K. (2014). Control of Two-Wheeled Balancing and Standing-up Behaviors by an Android Phone Robot. In Proc. of the 32nd Annual Conference of the Robotics Society of Japan, Kyushu Sangyo University.
Sakuma, T., Shimizu, T., Miki, Y., Doya, K., & Uchibe, E. (2013). Computation of Driving Pleasure based on Driver's Learning Process Simulation by Reinforcement Learning. In Proc. of Asia Pacific Automotive Engineering Conference.
Wang, J., Uchibe, E., & Doya, K. (2013). Standing-up and Balancing Behaviors of Android Phone Robot. In Proc. of IEICE-NLP2013-122, 49-54.
Uchibe, E., Ota, S., & Doya, K. (2013). Inverse Reinforcement Learning for Analysis of Human Behaviors. In The 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making, Princeton, New Jersey, USA, poster.
Ota, S., Uchibe, E., & Doya, K. (2013). Analysis of human behaviors by inverse reinforcement learning in a pole balancing task. In The 3rd International Symposium on Biology of Decision Making, Paris, France, poster.
Uchibe, E., & Doya, K. (2011). Evolution of rewards and learning mechanisms in Cyber Rodents. In J. L. Krichmar & H. Wagatsuma (Eds.), Neuromorphic and Brain-Based Robotics, chapter 6, 109-128.
Morimura, T., Uchibe, E., Yoshimoto, J., & Doya, K. (2009). A Generalized Natural Actor-Critic Algorithm. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, & A. Culotta (Eds.), Advances in Neural Information Processing Systems 22 (pp. 1312-1320). MIT Press.
Morimura, T., Uchibe, E., Yoshimoto, J., & Doya, K. (2008). A New Natural Policy Gradient by Stationary Distribution Metric. In Proc. of European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (pp. 82-97). Springer Berlin / Heidelberg.
Brunskill, E., Uchibe, E., & Doya, K. (2006). Adaptive state space construction with reinforcement learning for robots. In Proc. of the International Conference on Robotics and Automation, poster.
Morimura, T., Uchibe, E., & Doya, K. (2005). Utilizing the natural gradient in temporal difference reinforcement learning with eligibility traces. In Proc. of the 2nd International Symposium on Information Geometry and its Application (pp. 256-263).
Uchibe, E., & Doya, K. (2004). Competitive-Cooperative-Concurrent Reinforcement Learning with Importance Sampling. In S. Schaal, A. Ijspeert, A. Billard, S. Vijayakumar, J. Hallam, & J.-A. Meyer (Eds.), Proc. of the Eighth International Conference on Simulation of Adaptive Behavior: From Animals to Animats 8 (pp. 287-296). MIT Press, Cambridge, MA.