Writing your own Content Store implementation isn't too hard, so producing an HDFS-backed one shouldn't take much time.
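The details of the Content Store interface depend on which repository product you're using, so as a rough sketch here's just the HDFS side such a store would wrap, using the standard Hadoop `FileSystem` API. The class name, the URI, and the path-mapping scheme are all placeholders:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsContentStoreSketch {

        private final FileSystem fs;
        private final Path root;

        public HdfsContentStoreSketch(String hdfsUri, String rootDir) throws Exception {
            Configuration conf = new Configuration();
            // Connect to the cluster named in the URI, e.g. "hdfs://namenode:8020"
            this.fs = FileSystem.get(URI.create(hdfsUri), conf);
            this.root = new Path(rootDir);
        }

        // A real Content Store would map its content URLs onto HDFS paths here
        public void putContent(String relativePath, InputStream in) throws Exception {
            try (OutputStream out = fs.create(new Path(root, relativePath))) {
                IOUtils.copyBytes(in, out, 4096, false);
            }
        }

        public InputStream getContent(String relativePath) throws Exception {
            return fs.open(new Path(root, relativePath));
        }

        public boolean deleteContent(String relativePath) throws Exception {
            return fs.delete(new Path(root, relativePath), false);
        }
    }

Most of the work in a real implementation is mapping the store's content URLs to HDFS paths and plugging the class into the repository's configuration, not the HDFS calls themselves.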
Your big issue is that HDFS uses quite a large block size by default (IIRC 64 MB), so if you're storing small files you'll either want a scheme for packing multiple files into one HDFS block, or accept a lot of wasted space. Dropping the block size isn't generally recommended, as the extra blocks put too much pressure on the NameNode.

If you do store multiple small files in one HDFS block, think carefully about the format you use, so that other tools that might run across the files (e.g. MapReduce jobs) can easily work with them.
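One well-known option for that packing format is a Hadoop `SequenceFile`, keyed by the original file name, which MapReduce can read back directly via `SequenceFileInputFormat`. A minimal sketch (class and method names are my own):

    import java.io.IOException;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SmallFilePacker {

        // Pack many small payloads into one SequenceFile:
        // key = original file name, value = raw bytes.
        // A MapReduce job sees one record per packed file.
        public static void pack(Configuration conf, Path target,
                                Map<String, byte[]> files) throws IOException {
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(target),
                    SequenceFile.Writer.keyClass(Text.class),
                    SequenceFile.Writer.valueClass(BytesWritable.class))) {
                for (Map.Entry<String, byte[]> e : files.entrySet()) {
                    writer.append(new Text(e.getKey()),
                                  new BytesWritable(e.getValue()));
                }
            }
        }
    }

SequenceFiles are splittable and support compression, which is why they play well with downstream MapReduce jobs; the trade-off is that random access to a single packed file requires an index or a scan.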