The awesome stuff
Thoughts, stories and ideas.

Building QMK firmware for my dz60rgb

Recently I built my very first mechanical keyboard. One of my favorite parts is that it supports QMK firmware. Building the firmware is very easy.

Start by cloning the firmware code:

git clone https://github.com/qmk/qmk_firmware.git

Next, install the default compilers:

util/macos_install.sh

This will install Homebrew and the compilers.

The next step is to create your own keyboard settings.
In the keyboards directory you can find your hardware, with subdirectories for any variants. For me it is keyboards/dztech/dz60rgb.

Let's create your own setup:

mkdir keyboards/dztech/dz60rgb/keymaps/logic855
cp keyboards/dztech/dz60rgb/keymaps/ansi/* keyboards/dztech/dz60rgb/keymaps/logic855

You will now have two files in your personal directory:
config.h is for the settings you want to override, like disabling RGB.
keymap.c is for configuring all the keys and layers.

Compiling now is simple:

make dztech/dz60rgb:logic855

Flashing your keyboard works as follows.
For the dz60rgb you need to keep the ESC key pressed while connecting the USB cable.
This puts the keyboard in flash mode, so the keys won't work anymore. Find a second keyboard, or be smart with the timing when pressing enter. Tip: do a make clean first, compiling takes time ;)

make dztech/dz60rgb:logic855:dfu-util
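If you only have one keyboard, my reading of that tip is: kick off a full rebuild and use the compile time to re-plug the board with ESC held down, for example:

make clean                             # force a full rebuild so compiling takes a while
make dztech/dz60rgb:logic855:dfu-util  # hit enter, then hold ESC and reconnect the usb while it compiles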

Bam, we're done :)

In the next post I'll write some things about keymap.c :D

2018 predictions

January/February is always a good starting point to reflect on what you think is going to happen this year.

The web

Since the beginning of the internet we have seen some major milestones on different levels.
When I started in ICT we saw the first shift: web 1.0 -> web 2.0.
The old-school internet of AltaVista, Yahoo and simple HTML pages was replaced by asynchronous calls, with Google's Gmail website as a good example. Wikipedia and Facebook brought the more social aspects to life. Slowly we moved towards web 3.0, with small and adjustable applications that together form a service, for flexibility and scalability.
Some say 4.0 will be about 'facial recognition', but only the future will answer that.

For running services the same thing happened.
In the good old days we had multiple servers and managed them one by one. Then Cloud 1.0 happened: why run it on your own machines if other people can do all the maintenance for you, cost efficiently and with proper reliability? But the application services themselves rarely changed. We only replaced the hardware with other hardware, and used other people's databases instead of our own. With Docker, Zones and Jails that slowly started to change: people put services in a container and became less dependent.

We are now at the milestone 'Cloud 2.0'

Now we want to do more than store our data and run our apps in the cloud. With the cloud and machine learning-based analytics tools, enterprises are in a better position to use data to their benefit; data lakes and data streams are now very popular. And why have a server when you can put your apps in a "container"? 2.0 is about going serverless and only running your app when it is needed, which also makes it cost efficient and scalable.

Embrace the rise of the machines

In order to run all those services, we have to be less dependent on humans. There are more and more services running at the same time. Why not let a computer control those services? Bridge the gap with machine learning and automation. With Kubernetes we have already started to reduce the manual interaction. But why not also use machine learning for parsing logs and for scaling machines at the specific times it is actually needed? "Be ahead of the stream".

Heterogeneous cloud

A heterogeneous cloud integrates components from many different vendors into one single tool. Applications don't care anymore where they run. They are no longer dependent on cloud-specific services (like SNS/SQS). We are seeing more and more that services move from one cloud provider to another and back, depending on costs. We need to make it as simple as possible to transition between them.

Governance as code

With the number of services it becomes much more complex to make sure that the implementation follows your architecture rules. Some find security more important, others are more focused on performance. It will no longer be possible to manually monitor and check the health of the infrastructure and applications against those rules. An interesting resource on this topic is Neal Ford's Evolutionary Architectures.

Delegating a zfs dataset

I love SmartOS, but unfortunately delegating a dataset to one of your SmartOS or LX-branded zones is not supported with vmadm. It is possible though with zonecfg, the old way of combining zfs and zones.

Make sure you stop the zone you want to add the dataset to:
# vmadm halt 5b297ee0-e9ad-c834-d4b8-a4e75fd38c62

Let's create a zfs dataset:
# zfs create zones/data

Now let's edit the config:
# zonecfg -z 5b297ee0-e9ad-c834-d4b8-a4e75fd38c62

And do the following:

zonecfg:5b297ee0-...> add dataset
zonecfg:5b297ee0-...:dataset> set name=zones/data
zonecfg:5b297ee0-...:dataset> end
zonecfg:5b297ee0-...> verify
zonecfg:5b297ee0-...> exit

After that you can start your zone again and the zfs dataset will be available inside the zone.
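
For example, booting the zone and checking the dataset from the global zone (same UUID as above; output will vary per system):
# vmadm boot 5b297ee0-e9ad-c834-d4b8-a4e75fd38c62
# zlogin 5b297ee0-e9ad-c834-d4b8-a4e75fd38c62 zfs list zones/data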

You can also check it with vmadm:

# vmadm get 5b297ee0-e9ad-c834-d4b8-a4e75fd38c62 | json datasets
[
  "zones/data"
]

Technology radar

vi·sion·ar·y

thinking about or planning the future with imagination or wisdom

Introduction

Three weeks ago I was at the architecture conference in London, where I did a two-day training on the "Fundamentals of Architecture" by Mark Richards.
One of the things he taught was: as an architect you need to be aware of your surroundings. It is an important lesson not to stay inside your bubble. For example, if you only use MySQL you will miss out on the nice features that NoSQL brings you.

Does this mean you need to know everything about everything? No, of course not, but you should at least have an understanding of the techniques (like microservices) or products. Then the question becomes: what is everything? There is a lot out there in the world. To help you prioritize there is a nice tool, or rather technique, called a "Technology Radar".

Technology radar

The Technology Radar is a living document with frameworks, tools, platforms and techniques, giving you a good overview of what to focus on for the upcoming months. I am going to update mine every three months; there is only so much you can do with your time :)

The radar has 4 rings:

  • Hold: if you have it, try to phase it out or put it on hold
  • Assess: a technology worth exploring, with the goal of understanding it and how it will affect you
  • Trial: technologies you (and/or colleagues) have decided are worth pursuing; invest more time in them so you have a full understanding
  • Adopt: items you should adopt, the no-brainers you should have. Docker is a nice example

Continuation

Now that you know the basics, invest some time in creating a nice radar. Don't go overboard: focus on the first three months. After that, revisit the radar and move items and/or introduce new ones.

Why I like it

I like it because it gives me focus and clarity on what I need to spend my time on, and it keeps me aware of upcoming tools.

My current radar can be found here.
A more in-depth post about this concept is on the ThoughtWorks website.

Offline npm packages

local-npm is a Node server that acts as a local npm registry. It serves modules, caches them, and updates them whenever they change.

Last weekend I was on a plane flying to Lisbon, and sometimes it's handy to have a local cache of npm packages, whether you're on a flight or at a workshop. Getting this working is pretty simple.

local-npm

Using local-npm is like using a local npm mirror without a complete replication. It's not as big as sinopia, which also lets you host private repos.

Your npm installs are fetched from npmjs, and the modules and their deps are stored in a local PouchDB. It also takes care of keeping modules updated when they change.

To get local-npm installed, run:

# npm install -g local-npm
# local-npm

local-npm replicates the skimdb part of the npm database and starts replicating right away. You don't have to wait for it to reach 100%; you can already start using the registry. It will fall back to the online version for any library that isn't in its proxy yet.

To complete the setup, you need to point npm at the local local-npm server:

# npm set registry http://127.0.0.1:5080

local-npm also has a simple UI for browsing and searching the cached modules. You can access it at http://localhost:5080/_browse.
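
If you want to double-check where npm is pointing (just a sanity check, not part of the original setup; it should print the registry URL you set above):

# npm config get registry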

npmrc

Switch between different .npmrc files with ease and grace

It's handy to have a local proxy, but sometimes it's easier to just use a live one, or the private repo from work.
If that's the case, you know it's annoying to switch between a bunch of different .npmrc files and manually manage symlinks.

npmrc is a handy tool to save the day. It swaps your .npmrc for a specific named version.

Installation:

# npm install -g npmrc

The first time you run it, it will create the links:

# npmrc
Initialising npmrc...
Creating /Users/leon/.npmrcs
Making /Users/leon/.npmrc the default npmrc file
Activating .npmrc "default" 

Let's create the local-proxy profile:

# npmrc -c local-proxy
Removing old .npmrc (default)
Activating .npmrc "proxy"

A blank profile will be created. To point your profile to the local registry and start using it:

# npm set registry http://127.0.0.1:5080

and switching back to the default is as simple as calling it by name:

# npmrc default
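
If you forget which profiles you have, running npmrc without any arguments should list them and mark the active one (at least in the version I used):

# npmrc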

It also has shortcuts for some well-known public npm registries:

# npmrc -r eu
Using http://registry.npmjs.eu/ registry.

For more options, check the help.

24 March 2016
