Modeling documents with Generative Adversarial Networks
This repository contains the code required to replicate the experiments from my work on using Generative Adversarial Networks to learn distributed representations of documents in an unsupervised manner. An overview blog post is also available. Code on GitHub: https://github.com/johnglover/adversarial-document-model.
An introduction to Generative Adversarial Networks
Most of the work that I did at SoundCloud was proprietary, so there are not many links to share. On the Data team we worked primarily with large-scale distributed systems such as Hadoop, Cassandra, RabbitMQ and Elasticsearch. For an overview of one of the systems that I worked on, see: Stitch - A real-time counting service.
One part of our log/event pipeline is open source: Barn, a tool for shipping log files to HDFS.
At Open Knowledge
- CKAN: the leading open-source data portal platform, making it easy to publish, share, find and use data.
For more see http://github.com/okfn.
Metamorph is an open source library for performing high-level sound transformations based on a sinusoids plus noise plus transients model. It is written in C++, can be built as both a Python extension module and a Csound opcode, and currently runs on Mac OS X and Linux. It is designed to work primarily on monophonic, quasi-harmonic sound sources and can be used in a non-real-time context to process pre-recorded sound files or can operate in a real-time (streaming) mode.
Metamorph is available under the terms of the GNU General Public License (GPL).
For download and installation information, go to http://github.com/johnglover/metamorph.
Simpl is an open source library for sinusoidal modelling written in C/C++ and Python, and making use of Scientific Python (SciPy). It is primarily intended as a tool for other researchers in the field, allowing them to easily combine, compare and contrast many of the published analysis/synthesis algorithms.
For download and installation information, go to http://simplsound.sourceforge.net.
The project summary page is available at http://sourceforge.net/projects/simplsound.
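To give a flavour of the kind of analysis/synthesis that sinusoidal modelling involves, here is a simplified sketch: it picks spectral peaks from a single windowed FFT frame and resynthesises them as a sum of sinusoids. This is not Simpl's actual API — all function names are hypothetical, and real sinusoidal modellers also interpolate peak positions and track partials between frames.

```python
import numpy as np

def detect_peaks(frame, sample_rate, n_peaks=10):
    """Pick the strongest spectral peaks from one frame of audio.

    Returns (frequencies, amplitudes) for local maxima in the
    magnitude spectrum, sorted by amplitude.
    """
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    bin_freq = sample_rate / len(frame)
    # local maxima in the magnitude spectrum
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
    peaks.sort(key=lambda i: spectrum[i], reverse=True)
    peaks = peaks[:n_peaks]
    freqs = np.array([i * bin_freq for i in peaks])
    amps = spectrum[peaks] / (np.sum(window) / 2)  # undo window gain
    return freqs, amps

def synthesise(freqs, amps, n_samples, sample_rate):
    """Resynthesise a frame as a sum of stationary sinusoids."""
    t = np.arange(n_samples) / sample_rate
    out = np.zeros(n_samples)
    for f, a in zip(freqs, amps):
        out += a * np.sin(2 * np.pi * f * t)
    return out

# analyse a 440 Hz test tone: the strongest peak should sit near 440 Hz
sr = 44100
frame = np.sin(2 * np.pi * 440 * np.arange(1024) / sr)
freqs, amps = detect_peaks(frame, sr, n_peaks=1)
```

Libraries like Simpl build on this basic idea but add peak interpolation, partial tracking across frames and residual modelling.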
Modal is an open source (GPL), cross-platform library for musical onset detection written in C++ and Python. It contains implementations of several onset detection functions from the literature as well as a number of new onset detection functions that were created as part of my Ph.D. research.
Modal also contains a free collection of samples together with hand-annotated onset locations, all under Creative Commons licensing allowing free reuse and redistribution.
It is available at http://github.com/johnglover/modal.
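One of the simplest onset detection functions from the literature is spectral flux: a note onset tends to show up as energy suddenly appearing in new frequency bins. The sketch below is a generic illustration of that idea, not Modal's API — all names here are hypothetical.

```python
import numpy as np

def spectral_flux(audio, frame_size=512, hop_size=256):
    """Compute a spectral flux onset detection function.

    For each frame, sum the positive differences between successive
    magnitude spectra; peaks in the resulting function indicate
    likely note onsets.
    """
    window = np.hanning(frame_size)
    prev_mags = np.zeros(frame_size // 2 + 1)
    odf = []
    for start in range(0, len(audio) - frame_size, hop_size):
        frame = audio[start:start + frame_size] * window
        mags = np.abs(np.fft.rfft(frame))
        # half-wave rectified difference: only increases in energy count
        odf.append(np.sum(np.maximum(mags - prev_mags, 0)))
        prev_mags = mags
    return np.array(odf)

# a toy signal: 100 ms of silence followed by a 440 Hz tone
sr = 44100
signal = np.concatenate([np.zeros(sr // 10),
                         np.sin(2 * np.pi * 440 * np.arange(sr // 10) / sr)])
odf = spectral_flux(signal)
onset_frame = int(np.argmax(odf))  # frame index of the strongest onset cue
```

A full onset detector would smooth and peak-pick this function rather than just taking the maximum.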
Implementations of the automatic note segmentation techniques proposed by Caetano et al. and Glover et al. discussed in the paper: "Real-Time Segmentation of the Temporal Evolution of Musical Sounds", Glover, Lazzarini and Timoney, Proceedings of Acoustics 2012 Hong Kong Conference.
It is available at http://github.com/johnglover/notesegmentation.
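As a rough illustration of envelope-based note segmentation — a simplified sketch using crude thresholds, not the method from the paper or the code in the repository — the example below estimates attack and release boundaries from an RMS amplitude envelope.

```python
import numpy as np

def rms_envelope(audio, frame_size=512, hop_size=256):
    """Frame-by-frame RMS amplitude envelope of an audio signal."""
    return np.array([
        np.sqrt(np.mean(audio[i:i + frame_size] ** 2))
        for i in range(0, len(audio) - frame_size, hop_size)
    ])

def segment_note(audio, attack_ratio=0.8, release_ratio=0.2,
                 frame_size=512, hop_size=256):
    """Estimate attack-end and release-start boundaries (in samples).

    A crude threshold scheme: the attack is taken to end when the
    envelope first reaches attack_ratio of its peak, and the release
    to begin when the envelope last exceeds release_ratio of the peak.
    Published methods use more robust features, but the idea is similar.
    """
    env = rms_envelope(audio, frame_size, hop_size)
    peak = env.max()
    attack_end = int(np.argmax(env >= attack_ratio * peak)) * hop_size
    above = np.where(env >= release_ratio * peak)[0]
    release_start = int(above[-1]) * hop_size
    return attack_end, release_start

# toy note: linear attack, steady sustain, linear release
sr = 44100
shape = np.concatenate([np.linspace(0, 1, sr // 10),
                        np.ones(sr // 5),
                        np.linspace(1, 0, sr // 10)])
note = shape * np.sin(2 * np.pi * 440 * np.arange(len(shape)) / sr)
attack_end, release_start = segment_note(note)
```

Fixed amplitude thresholds like these break down on noisy or expressive material, which is why the published techniques combine several features.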
Libsms is an open source C library that implements SMS techniques for the analysis, transformation and synthesis of musical sounds based on a sinusoidal plus residual model. It is derived from the original code written by Xavier Serra as part of his PhD thesis.
The libsms project was started by Rich Eakin while at the Music Technology Group (MTG) in Barcelona. I worked with Rich on the project as part of an EU COST programme at the MTG in 2009 and have continued to work on it since. The current release of libsms is available at: http://mtg.upf.edu/static/libsms
My own working copy can be found at: http://github.com/johnglover/libsms
ALife on Asteroids
ALife on Asteroids was a project that I worked on with the Advanced Concepts Team at the European Space Agency during the 2010 Google Summer of Code, developed as part of their PaGMO project.
The goal was to use genetic algorithms to evolve a walking behaviour for a robot in a low gravity environment. This was achieved by first building a 3D model of a robot and putting it into a virtual world, using Open Dynamics Engine to simulate rigid body dynamics. Feedback and control information for each limb of the robot was then passed to a recurrent neural network, created using the PyBrain module. Finally, PaGMO was used to find suitable values for the weights of the neural network that were controlling the robot.
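The overall loop can be sketched as a minimal neuroevolution example. This uses neither PyBrain nor PaGMO, and it replaces the physics simulation with a toy fitness function so that it is self-contained — every name here is a hypothetical stand-in for the real components.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "task": learn to imitate a fixed target controller
STATES = rng.standard_normal((20, 3))
TARGET_W = np.array([[1.0, -1.0, 0.5],
                     [0.5, 1.0, -1.0]])
TARGETS = np.tanh(STATES @ TARGET_W.T)

def controller(weights, state):
    """A tiny feedforward 'controller' mapping sensor state to an action.

    Stand-in for the recurrent network used in the project.
    """
    w = weights.reshape(2, 3)  # 3 inputs -> 2 outputs
    return np.tanh(w @ state)

def fitness(weights):
    """Toy fitness: negated error against the target mapping.

    In the project this was a physics simulation scoring how far
    the robot walked; higher is better in both cases.
    """
    preds = np.array([controller(weights, s) for s in STATES])
    return -np.sum((preds - TARGETS) ** 2)

def evolve(n_weights=6, pop_size=30, generations=50, sigma=0.3):
    """A simple elitist evolutionary loop over network weights."""
    pop = rng.standard_normal((pop_size, n_weights))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-5:]]  # keep the 5 best unchanged
        # next generation: the elite plus mutated copies of them
        pop = np.concatenate([
            elite,
            elite[rng.integers(0, 5, pop_size - 5)]
            + sigma * rng.standard_normal((pop_size - 5, n_weights))
        ])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))], scores.max()

best_weights, best_score = evolve()
```

The real setup swapped in a recurrent network for the controller, the ODE-based robot simulation for the fitness function, and PaGMO's optimisation algorithms for this hand-rolled loop.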
Other Open Source Projects
I have also contributed to the following open source audio projects: