The InterPlanetary File System (IPFS) is a distributed, peer-to-peer file system. It’s pretty cool. So, here’s an experimental plugin that adds IPFS file storage backend support to Known.

Currently this functions as a drop-in replacement for the Known file storage system, along the same lines as the S3 plugin. It’ll store photos, profile pictures, and any other stored data in IPFS instead of on the local file system or in Mongo (if you’re using Mongo).

Usage 

You’ll need an IPFS server to talk to. For development I installed go-ipfs, so you can use that, or one of the public ones.
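If you do go the go-ipfs route, a local node is only a couple of commands away (its HTTP API listens on localhost port 5001 by default):

```
ipfs init    # one-time setup of the local IPFS repository
ipfs daemon  # start the node and its HTTP API
```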

Next, copy the IPFS directory to your IdnoPlugins directory, and activate it.
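For example, assuming your Known install lives at /var/www/known (adjust the path to suit your setup):

```
cp -r IPFS /var/www/known/IdnoPlugins/
```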

By default, the plugin is set up to talk to localhost, but you probably don’t want to do that forever, so update your config.ini as follows:
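The exact keys depend on the plugin version, but the configuration takes roughly this shape (the host and port values here are illustrative assumptions, not defaults to keep):

```ini
[IPFS]
host = 'ipfs.example.com'
port = 5001
```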

Replace the values accordingly, but make sure you keep the [IPFS] section header.

Still to do

At the moment, this is a drop-in functional replacement for file storage, and doesn’t yet touch some of the cooler things you can do with content-addressable storage.

As pointed out in this ticket, an obvious improvement would be to keep caching content from the image proxy to IPFS (which already takes place), but then reference it directly via its content hash (which doesn’t currently happen), as this should be more efficient.

That’s future development, though, and would require some core hooks. I’ll get to it next, I’m sure.

Anyway, kick the tires and let me know your thoughts. Pull requests more than welcome!

» Visit the project on GitHub...

So, I’ve been quite busy recently.

I’ve made some decisions in my personal life that have resulted in a bit of a change of direction and focus for me. 

It’s been exciting, and has necessarily meant some changes. This has given me the opportunity to “sharpen my tools”, so I’ve been getting around to playing with a bunch of technologies that had long sat on my “weekend project” list but never made it up the priority list.

This isn’t directly related to the title of this article, but it provides context: as part of one of these projects, I needed to populate a database with the contents of a bunch of rather unwieldy CSV files (within a Docker container, so that their contents could be exposed by a Node.js/Express REST API, but I digress).

It was while reading the various man pages, before launching into writing a script, that I found this little gem: MariaDB’s LOAD DATA INFILE statement. This meant I could do the import and provisioning of my Docker instance straight from the /docker-entrypoint-initdb.d SQL scripts.

First, create your tables in the normal way, defining the data types that are represented in your CSV file. For example:
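Something like this, say, for a table of named coordinates (the table name, columns, and types here are just an illustration):

```sql
CREATE TABLE locations (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    latitude DECIMAL(10, 7) NOT NULL,
    longitude DECIMAL(10, 7) NOT NULL
);
```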

Into which, as you might expect, you’d want to import a long list of location coordinates from a CSV file structured as follows:
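That is, a header row naming the columns, followed by one record per line (these rows are made up for illustration):

```
name,latitude,longitude
"Big Ben",51.5007292,-0.1246254
"Golden Gate Bridge",37.8199286,-122.4782551
```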

Now, in your SQL, execute the following query:
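A LOAD DATA INFILE statement matching the sketches above (the path and the LOCAL option are assumptions about where the file sits relative to the server; inside the MariaDB Docker image, dropping the CSV alongside your init scripts in /docker-entrypoint-initdb.d is convenient):

```sql
LOAD DATA LOCAL INFILE '/docker-entrypoint-initdb.d/locations.csv'
INTO TABLE locations
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(name, latitude, longitude);
```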

This tells MariaDB to read each line of locations.csv into the locations table, skipping the first line (which contains the header).

This little trick meant I was able to provision my API’s backend quickly and easily, without the need to hack together some arduous import script.

Hope you find this useful too!

Well, it has been a long time coming, but I’m delighted to report that you can now help out the Known project, in a big way, by joining our OpenCollective!

By joining us at OpenCollective, you can help fund the project. Help us keep the lights on, and help us spend more time building the software that you love.

So, if you find Known useful, I strongly encourage you to sign up and contribute!