Some important changes to Known were merged in over the weekend.

Most notably, (most) external dependencies are now managed and installed via Composer, rather than being bundled in the repository itself.

This makes updates easier to manage, but it does mean that if you are installing from (or more importantly, upgrading from) the git repository directly, you will need to perform an extra step:

cd /path/to/known; composer install

This is particularly important if you’re upgrading, and your site is a checkout of the git repo.

Some other important changes

Composer was the big one, but there have been a number of other important changes you should probably be aware of, especially if you write code for Known:

Support for IdnoPlugins.local

You can now put your custom plugins in a separate IdnoPlugins.local directory. This works exactly like IdnoPlugins, but gives us a way of nicely separating custom code from package code.
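
So, assuming your custom plugin lives in a directory called MyPlugin (a made-up name, purely for illustration), moving it over is just:

cd /path/to/known
mkdir -p IdnoPlugins.local
mv IdnoPlugins/MyPlugin IdnoPlugins.local/

Known will pick it up from there just as it would from IdnoPlugins.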

FileSystem interface change

File systems now support storeContent(), for storing content held in an in-memory variable to a file.

This method must be implemented by any class that implements the interface, so if you maintain a FileSystem plugin, you’ll need to update your code.
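
As a rough sketch of what that means in practice (the class name and parameter list here are purely illustrative, so check the interface definition in core for the exact signature before copying anything):

<?php
class MyCustomFileSystem /* your existing FileSystem plugin class */
{
    // New requirement: store content held in memory straight to a file.
    public function storeContent($content, $filename)
    {
        // For a simple local-disk backend this could be as little as:
        file_put_contents('/path/to/storage/' . $filename, $content);

        return $filename; // or whatever identifier your backend uses
    }
}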

SQLite is deprecated

SQLite is not widely used, and supporting it introduces a fair amount of technical debt.

Support is now deprecated, and will be removed in a future release.

The Interplanetary File System (IPFS) is a distributed, peer-to-peer file system. It’s pretty cool. So, here’s an experimental plugin that adds backend file system support for this protocol to Known.

Currently this functions as a drop-in replacement for the Known file storage system, along the same lines as the S3 plugin. It’ll store photos, profile pictures, and any other stored data in IPFS instead of on the local file system or in Mongo (if you’re using Mongo).

Usage 

You’ll need an IPFS server to talk to. For development I installed go-ipfs, so you can use that, or point at one of the public nodes.
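
If you go the go-ipfs route, getting a node up for local testing is just (assuming the ipfs binary is already installed):

ipfs init
ipfs daemon

By default the daemon exposes its API on localhost port 5001, which is presumably what the plugin’s localhost default expects.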

Next, copy the IPFS directory to your IdnoPlugins directory, and activate it.
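
In other words, something along the lines of (paths are illustrative):

cp -r IPFS /path/to/known/IdnoPlugins/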

By default, the plugin is set up to talk to localhost, but you probably don’t want to do that forever, so update your config.ini as follows:
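
Something like this, for example (the host and port are the things to change; treat the key names here as illustrative and check the plugin’s README for the exact ones):

[IPFS]
host = 'ipfs.example.com'
port = 5001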

Replace the values accordingly, but make sure you keep the [IPFS] section header.

Still to do

At the moment, this is a drop-in functional replacement for file storage, and doesn’t yet take advantage of some of the cooler things you can do with content-addressable storage.

As pointed out in this ticket, an obvious improvement would be to take the content the image proxy already caches to IPFS and reference it directly via its content hash, which doesn’t currently happen and should be more efficient.

Anyway, that’s future development and would require some core hooks. I’ll get to that next, I’m sure.

In the meantime, kick the tires and let me know your thoughts. Pull requests are more than welcome!

» Visit the project on Github...

So, I’ve been quite busy recently.

I’ve made some decisions in my personal life that have resulted in a bit of a change of direction and focus for me. 

It’s been exciting, and has necessarily meant some changes. It’s also given me the opportunity to “sharpen my tools”, so I’ve been getting around to playing with a bunch of technologies that have long sat on my “weekend project” list but never quite made it to the top.

This isn’t directly related to the title of this article, but it provides some context: as part of one of these projects, I needed to populate a database with the contents of a bunch of rather unwieldy CSV files (inside a Docker container, so that their contents could be exposed by a NodeJS/Express REST API, but I digress).

It was while reading the various man pages, before launching into writing a script, that I found this little gem: LOAD DATA INFILE. It meant I could do the import and provisioning of my Docker instance straight from the /docker-entrypoint-initdb.d SQL scripts.

First, create your tables in the normal way, defining the data types that are represented in your CSV file. For example:
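
A simple table of named coordinates might look like this (the columns are illustrative; yours should match whatever is in your CSV):

CREATE TABLE locations (
    name VARCHAR(255) NOT NULL,
    latitude DECIMAL(10,7) NOT NULL,
    longitude DECIMAL(10,7) NOT NULL
);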

Into which, as you might expect, you’d want to import a long list of location coordinates from a CSV file structured as follows:
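
Something along these lines, with a header row followed by the data (values made up for illustration):

name,latitude,longitude
Oxford,51.7520,-1.2577
Cambridge,52.2053,0.1218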

Now, in your SQL, execute the following query:
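
Something like this (adjust the path to wherever your CSV actually lives inside the container; depending on your setup you may or may not need the LOCAL keyword):

LOAD DATA LOCAL INFILE '/docker-entrypoint-initdb.d/locations.csv'
INTO TABLE locations
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES;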

This tells MariaDB to read each line of locations.csv into the locations table, skipping over the first line (the header).

This little trick meant I was able to provision my API’s backend quickly and easily, without the need to hack together some arduous import script.

Hope you find this useful too!