The most up-to-date list of publications can be found on my Google Scholar page.
My extended CV is here.
Thesis

Articles
Intrinsically Motivated Open-Ended Learning
- Colas, C., Karch, T., Moulin-Frier, C. & Oudeyer, P. Y. (2022). Vygotskian Autotelic Artificial Intelligence: Language and Culture Internalization for Human-Like AI. In review. Slides.
- Akakzia, A., Serris, O., Sigaud, O. & Colas, C. (2022). Help Me Explore: Minimal Social Interventions for Graph-Based Autotelic Agents. In review. Code.
- Sigaud, O., Caselles-Dupré, H., Colas, C., Akakzia, A., Oudeyer, P. Y. & Chetouani, M. (2021). Towards Teachable Autotelic Agents. In review.
- Colas, C., Karch, T., Sigaud, O. & Oudeyer, P. Y. (2021). Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey. Accepted to JAIR.
- Akakzia, A., Colas, C., Oudeyer, P. Y., Chetouani, M. & Sigaud, O. (2020). Grounding Language to Autonomously-Acquired Skills via Goal Generation. Accepted at ICLR 2021. Code.
- Portelas, R., Colas, C., Weng, L., Hofmann, K., Oudeyer, P. Y. (2020). Automatic Curriculum Learning For Deep RL: A Short Survey. Accepted at IJCAI 2020. Talk.
- Colas, C., Karch, T., Lair, N., Dussoux, J. M., Moulin-Frier, C., Dominey, P. F., & Oudeyer, P. Y. (2020). Language as a Cognitive Tool to Imagine Goals in Curiosity-Driven Exploration. Accepted at NeurIPS 2020. Talk. Code.
- Lair, N., Colas, C., Portelas, R., Dussoux, J. M., Dominey, P. F., & Oudeyer, P. Y. (2019). Language Grounding through Social Interactions and Curiosity-Driven Multi-Goal Learning. Accepted at the Visually Grounded Interaction and Language NeurIPS workshop, 2019.
- Portelas, R., Colas, C., Hofmann, K., & Oudeyer, P. Y. (2019). Teacher Algorithms for Curriculum Learning of Deep RL in Continuously Parameterized Environments. Accepted at CoRL 2019. Code.
- Colas, C., Sigaud, O., Oudeyer, P. Y. (2018). CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning. Accepted at ICML 2019. Video. Talk. Code.
- Fournier, P., Colas, C., Chetouani, M., & Sigaud, O. (2019). CLIC: Curriculum Learning and Imitation for feature Control in non-rewarding environments. Accepted to IEEE Transactions on Cognitive and Developmental Systems.
- Colas, C., Sigaud, O., Oudeyer, P. Y. (2018). GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms. Accepted at ICML 2018. Talk. Code.
Optimization and Epidemiology
- Colas, C., Hejblum, B., Rouillon, S., Thiébaut, R., Oudeyer, P. Y., Moulin-Frier, C. & Prague, M. (2020). EpidemiOptim: A Toolbox for the Optimization of Control Policies in Epidemiological Models. Accepted to JAIR. Slides. Demo. Code.
Evolutionary Computation
Statistics for RL
- Colas, C., Sigaud, O., Oudeyer, P. Y. (2019). A Hitchhiker’s Guide to Statistical Comparisons of Reinforcement Learning Algorithms. Code.
- Colas, C., Sigaud, O., Oudeyer, P. Y. (2018). How Many Random Seeds? Statistical Power Analysis in Deep Reinforcement Learning Experiments.
Brain-Computer Interfaces
Digital Art