👋 Hi, I’m Valentin. Curiosity, backed by a capacity to learn at record speed, is where my drive comes from. Being curious is my nature, and swift learning is my weapon of mass productivity:
Just-in-time learning feels like a superpower. It’s the meta-skill: you face a problem you know nothing about, you search and learn, and suddenly you have a solution.
Learning is about creating mental models. Simplify, remove the noise, and get to the essential: how can I get 95% of the output with only 5% of the effort?
“The most important trick to be happy is to realize that happiness is a choice that you make.” — Naval Ravikant
Master in Computer Science, 2016
Télécom ParisTech
Preparatory School in Physics and Chemistry, 2013
Lycée Lakanal
What if we could stop wasting most of our development time re-developing the same thing again and again?
Verifiable credentials (VCs) are the digital-world translation of physical credentials like a driving license or a degree. In practice, they are just signed JSON. Let’s see what a signed ‘hello world’ would look like.
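To make that concrete, here is a minimal sketch in Go of a ‘hello world’ claim signed with Ed25519. The field names are deliberately simplified for illustration and are not the W3C VC data model:

```go
// A minimal sketch of the idea behind a verifiable credential: a JSON claim
// plus a signature ("proof") over its bytes. Field names are illustrative.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	// The claim we want to make verifiable.
	claim := map[string]string{"message": "hello world"}
	payload, _ := json.Marshal(claim)

	// The issuer's key pair. In a real VC, the public key would be
	// resolvable from the issuer's identifier (e.g., a DID).
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

	// The "credential" is just the payload plus a proof over its bytes.
	sig := ed25519.Sign(priv, payload)
	credential := map[string]any{
		"claim": claim,
		"proof": base64.RawURLEncoding.EncodeToString(sig),
	}
	out, _ := json.MarshalIndent(credential, "", "  ")
	fmt.Println(string(out))

	// Anyone holding the issuer's public key can verify the claim offline.
	fmt.Println("verified:", ed25519.Verify(pub, payload, sig))
}
```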
Look inside: What’s the promise of the blockchain you believe in? That it will change the world? Or that it will make you rich?
This message is of no importance… (but essential)
Qonto is a European neobank for professionals. To provide a higher quality of service to its clients, in 2018 it developed its own “Core Banking System”, meaning that it maintains all of its clients’ accounts and processes all of their transactions itself. (Beforehand, it relied on an external partner to do so.) At Qonto, I was part of the Ledger team, which maintains the “source of truth” for all accounts and their transactions. These micro-services were implemented with a Go plus PostgreSQL stack. I also made several contributions to improve and maintain Qonto’s billing system, which is implemented in Ruby on Rails.
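For flavor, here is a minimal sketch of what a ledger-style write can look like on a Go plus PostgreSQL stack. The schema, table, and column names are invented for illustration; this is not Qonto’s actual code:

```go
// A sketch of an atomic ledger transfer: either both balance updates and the
// transaction record commit together, or none of them do. Invented schema.
package ledger

import (
	"context"
	"database/sql"

	_ "github.com/lib/pq" // PostgreSQL driver
)

// RecordTransaction debits one account and credits another atomically.
func RecordTransaction(ctx context.Context, db *sql.DB, from, to string, cents int64) error {
	tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op if the commit below succeeds

	if _, err := tx.ExecContext(ctx,
		`UPDATE accounts SET balance_cents = balance_cents - $1 WHERE id = $2`,
		cents, from); err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx,
		`UPDATE accounts SET balance_cents = balance_cents + $1 WHERE id = $2`,
		cents, to); err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx,
		`INSERT INTO transactions (from_id, to_id, amount_cents) VALUES ($1, $2, $3)`,
		from, to, cents); err != nil {
		return err
	}
	return tx.Commit()
}
```

Running both balance updates and the transaction record inside a single serializable database transaction is what lets a ledger act as a source of truth: the books can never half-move money.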
Qonto uses an advanced micro-service architecture, with over 80 services (and counting!) continuously deployed with GitLab, Kubernetes, and Argo CD.
PacketAI aims to develop an IT infrastructure monitoring platform, similar to Datadog and Dynatrace, but equipped with Machine Learning to predict incidents in advance and locate their root cause.
I started when PacketAI had just received its seed funding, with only two other developers. I quickly got a grasp of their stack, and within days of my arrival I was adding new features to the agent, the software running on client hosts that collects events and metrics. I designed and developed from scratch all of PacketAI’s microservices, all in Go, plus a Logstash node, the whole stack runnable locally with docker-compose. I was also involved in developing the CI/CD pipelines of our Go projects on GitLab.

CSRC is a publicly-funded research center within KAIST university. I was free to define the problems I worked on, figure out potential solutions, then design and develop their implementation, and finally test and evaluate these prototypes. This experience let me demonstrate my capacity for abstraction: finding solutions based on principles.
I also made full use of my engineering mind and completed three large projects. First, a modification of the Linux kernel’s memory-allocation code for drivers (in C). Second, an improvement to the dynamic testing tool of LLVM, a compiler-infrastructure project written in C++; this improvement was merged into the mainline by a team at Google. And lastly, Ankou, my largest project: a fuzzer I developed from scratch in Go. Ankou found more than a thousand unique crashes in open-source projects such as htop, grep, find, etc.

Entropic is an information-theoretic power schedule implemented on top of LibFuzzer. It boosts performance by changing the weights assigned to the seeds in the corpus: seeds revealing more “information” are assigned a higher weight.
Entropic received the ACM SIGSOFT Distinguished Paper Award! Furthermore, its code was made the default schedule in LibFuzzer @ LLVM (a C++ code base), which powers Google’s OSS-Fuzz and Microsoft’s OneFuzz 🚀.
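The intuition can be sketched in a few lines. This is only an illustration of the information-theoretic idea, not Entropic’s actual estimator (which also handles rare features, smoothing, and abundance estimation):

```go
// A sketch of an entropy-based seed weight: a seed whose mutants spread
// their hits over many coverage features is more "informative" than one
// whose mutants always hit the same feature, so it deserves more energy.
package main

import (
	"fmt"
	"math"
)

// seedEntropy takes how many times each coverage feature was hit by the
// mutants of one seed and returns the Shannon entropy (in bits) of the
// normalized hit distribution.
func seedEntropy(featureHits []float64) float64 {
	var total float64
	for _, h := range featureHits {
		total += h
	}
	if total == 0 {
		return 0
	}
	var entropy float64
	for _, h := range featureHits {
		if h == 0 {
			continue
		}
		p := h / total
		entropy -= p * math.Log2(p)
	}
	return entropy
}

func main() {
	fmt.Println(seedEntropy([]float64{10, 10, 10, 10})) // 2 bits: informative
	fmt.Println(seedEntropy([]float64{40, 0, 0, 0}))    // 0 bits: uninformative
}
```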
Grey-box fuzzing is an evolutionary process, which maintains and evolves a population of test cases with the help of a fitness function. Fitness functions used by current grey-box fuzzers are not informative in that they cannot distinguish different program executions as long as those executions achieve the same coverage. The problem is that current fitness functions only consider a union of data, but not their combination. As such, fuzzers often get stuck in a local optimum during their search. In this paper, we introduce Ankou, the first grey-box fuzzer that recognizes different combinations of execution information, and present several scalability challenges encountered while designing and implementing Ankou. Our experimental results show that Ankou is 1.94× and 8.0× more effective in finding bugs than AFL and Angora, respectively.
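To illustrate the union-versus-combination distinction, here is a toy sketch. It is not Ankou’s actual fitness function, just the distinction the abstract describes:

```go
// A union-style fitness only asks "did this execution add any new edge to
// the global set?", so two executions covering the same edges are
// indistinguishable. A combination-style fitness keys on the whole set of
// edges exercised, so a new combination of already-seen edges still counts.
package main

import (
	"fmt"
	"sort"
	"strings"
)

type Fuzzer struct {
	seenEdges  map[string]bool // union of all edges ever covered
	seenCombos map[string]bool // distinct whole-execution edge sets
}

// Interesting reports (unionNew, comboNew) for one execution's edge set.
func (f *Fuzzer) Interesting(edges []string) (bool, bool) {
	unionNew := false
	for _, e := range edges {
		if !f.seenEdges[e] {
			f.seenEdges[e] = true
			unionNew = true
		}
	}
	key := comboKey(edges)
	comboNew := !f.seenCombos[key]
	f.seenCombos[key] = true
	return unionNew, comboNew
}

func comboKey(edges []string) string {
	sorted := append([]string(nil), edges...)
	sort.Strings(sorted)
	return strings.Join(sorted, "|")
}

func main() {
	f := &Fuzzer{seenEdges: map[string]bool{}, seenCombos: map[string]bool{}}
	f.Interesting([]string{"a", "b"})
	// No new edge (union says "boring"), but a combination never seen
	// before (combination says "interesting").
	u, c := f.Interesting([]string{"a"})
	fmt.Println(u, c) // false true
}
```

Keeping every distinct combination, as this toy does, blows up quickly; that is precisely the kind of scalability challenge the paper discusses.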
This paper surveys both the academic papers and the open-sourced tools in the field of fuzzing. We present a unified, general-purpose model to better understand the design and trade-offs of fuzzers.
The monolithic kernel is one of the prevalent configurations among the various kernel design models. While monolithic kernels excel in performance and manageability, they are unequipped for runtime system updates, which brings the need for kernel extensions. Although kernel extensions are a convenient measure for system management, it is well established that they make the system prone to rootkit attacks and kernel exploitation, as they share a single memory space with the rest of the kernel. To address this problem, various forms of isolation (e.g., running extensions as processes) have been proposed so far, yet their performance overhead is often too high or incompatible with a general-purpose kernel. In this paper, we propose the Domain Isolated Kernel (DIKernel), a new kernel architecture which securely isolates untrusted kernel extensions with minimal performance overhead. DIKernel leverages the hardware-based memory-domain feature of the ARM architecture, and prevents system-manipulation attacks originating from kernel extensions, such as rootkits and exploits caused by buggy kernel extensions. We implemented DIKernel on top of the Linux 4.13 kernel with 1500 LOC. Performance evaluation indicates that DIKernel imposes negligible overhead, as observed by cycle-level microbenchmarks.