The awesome stuff: SmartOS

What happened to 2021Q1 and 2021Q2 smartos images?

If you run imgadm avail, you will see that the latest LTS images of SmartOS are:

800db35c-5408-11eb-9792-872f658e7911  minimal-64-lts  20.4.0  smartos  zone-dataset  2021-01-11
1d05e788-5409-11eb-b12f-037bd7fee4ee  base-64-lts     20.4.0  smartos  zone-dataset  2021-01-11
188ee9ce-540a-11eb-9cc1-2748cd10e5e2  pkgbuild-lts    20.4.0  smartos  zone-dataset  2021-01-11

I was wondering what happened to the rest of the images? We used to see them every quarter. Jonathan Perkin answered the question on the mailing list (https://smartos.topicbox.com/groups/smartos-discuss/Tf17bc027dd6f9cba-M7dedf357e2bb4ca48d8065a5):

I stopped producing the non-LTS quarterly releases.  They weren't all 
that useful (in my opinion), as users are better served running either 
LTS if they want a static set of packages with the occasional security 
fix, or trunk if they just want the latest and most secure software.  

It's also hard to justify spending time and resources on them now that 
JPC is no more (so my available hardware is significantly reduced) and 
I'm no longer working on pkgsrc full-time.

Simply change your pkgin repo to the trunk version if you want the latest and greatest:

Edit /opt/local/etc/pkgin/repositories.conf and replace:

/packages/SmartOS/2020Q4/x86_64/All -> /packages/SmartOS/trunk/x86_64/All

and then run pkgin upgrade.
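The steps above can be scripted; here is a minimal sketch of the path substitution, assuming 2020Q4 is your current quarter as in the example (the sed expression is mine, not from the mailing list):

```shell
# The substitution to apply to the repository line in
# /opt/local/etc/pkgin/repositories.conf (quarter assumed: 2020Q4).
old='/packages/SmartOS/2020Q4/x86_64/All'
new=$(printf '%s' "$old" | sed 's|/2020Q4/|/trunk/|')
echo "$new"
# -> /packages/SmartOS/trunk/x86_64/All
# After editing the file, refresh the database and upgrade:
#   pkgin update && pkgin upgrade
```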

SmartOS: hash mismatch

Last week I had a small power outage, and I was able to shut down my SmartOS server properly. I thought that when the power returned, it would be nice to have an upgraded SmartOS image running. So I downloaded the latest release from /Joyent_Dev/public/SmartOS/20210128T022709Z and did the usual "dd" write as described on the wiki page (which I have also been doing for the last couple of years).

Now as soon as it boots, it loads the image and does a hash check on the boot archive. Every time I get a hash mismatch and the server reboots.

Loading unix...
Loading /platform/i86pc/amd64/boot_archive...
Loading /platform/i86pc/amd64/boot_archive.hash...
hash mismatch

and boom, it reboots

I tried booting with both UEFI and legacy BIOS. I also tried boot -s, but I cannot even get a prompt: it loads the kernel, then I get the hash-check failure and it reboots. I reverted to my older image, joyent_20200729T205408Z, and that one boots without issues.

Dan McDonald helped me out on the mailing list by giving me the following tip:

If you can mount the USB key with the bad archive somewhere else, you should be able to:
1.) Find the boot_archive file
2.) Find the boot_archive.hash file
3.) Run a SHA1 checksum (e.g. `digest -a sha1 boot_archive` or `openssl sha1 boot_archive`) and compare it to what's there.
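Dan's three steps can be sketched as follows. This version uses scratch files and sha1sum so it runs anywhere; on SmartOS you would run digest -a sha1 (or openssl sha1) against the real files on the mounted key, e.g. under /tmp/mnt/platform/i86pc/amd64. The /tmp paths and file contents below are placeholders:

```shell
# Compare a freshly computed SHA1 against the stored .hash file.
# Scratch files stand in for boot_archive and boot_archive.hash on the key.
printf 'boot archive contents' > /tmp/boot_archive
sha1sum /tmp/boot_archive | awk '{print $1}' > /tmp/boot_archive.hash

actual=$(sha1sum /tmp/boot_archive | awk '{print $1}')
expected=$(cat /tmp/boot_archive.hash)
[ "$actual" = "$expected" ] && echo 'hash OK' || echo 'hash mismatch'
# -> hash OK (a failing card gives a different digest on every read)
```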

I should have known better: the SD card had failed on me. Looks like some bits are broken :) every time I get a different hash.

[root@master /tmp/mnt/platform/i86pc/amd64]# digest -a sha1 boot_archive
f1cf6e1673a8ee251a1389308c1df6f6b8a57b43

[root@master /tmp/mnt/platform/i86pc/amd64]# digest -a sha1 boot_archive
bc45c7a15d56bc607533aa3750372050d290cbea

As you can see, every time something different. Lesson learned: never trust an SD card, and always do a hash check. dd alone isn't safe enough.
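To make that lesson concrete, here is a small sketch of a post-dd verification: read the written bytes back from the device and compare digests. The function name and the device path are my own placeholders, and on SmartOS you would use digest -a sha1 instead of sha1sum:

```shell
# verify_write IMAGE DEVICE: re-read the image-sized prefix of the device
# and compare checksums. A flaky card shows up here instead of at boot time.
verify_write() {
  bytes=$(wc -c < "$1")                            # image size in bytes
  img_sum=$(sha1sum "$1" | awk '{print $1}')
  dev_sum=$(head -c "$bytes" "$2" | sha1sum | awk '{print $1}')
  [ "$img_sum" = "$dev_sum" ] && echo verified || echo MISMATCH
}
# e.g.: verify_write smartos-usb.img /dev/rdsk/c1t0d0p0
```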

MicroK8s and SmartOS

"Autonomous low-ops Kubernetes for clusters, workstations, edge and IoT"

MicroK8s is a simple way to launch a single-node Kubernetes environment for local development, testing, and learning. It is a fast, small, cheap k8s for CI/CD.

Minikube is a similar tool to get Kubernetes up and running locally, but with one big difference: Minikube spins up a VM and runs Kubernetes inside it. MicroK8s doesn't need a VM, which means you get a lot more resources at your disposal; VMs are pretty heavy on a laptop.

So it sounds good to me :D Let's play with it on my SmartOS server.

1: Creating a KVM Ubuntu instance

First you need a VM running Ubuntu (the JSON below uses the bhyve brand). I used the following setup; create a file k8s-micro.json:

{
  "brand": "bhyve",
  "alias": "bionic-k8-master",
  "ram": "2048",
  "vcpus": "2",
  "resolvers": [
    "8.8.8.8"
  ],
  "nics": [
    {
      "nic_tag": "admin",
      "gateway": "192.168.1.1",
      "netmask": "255.255.255.0",
      "ip": "192.168.1.100",
      "model": "virtio",
      "primary": true
    }
  ],
  "disks": [
    {
      "image_uuid": "c9db249c-93ba-4507-9fa4-b4d0f81265fc",
      "boot": true,
      "model": "virtio"
    }
  ],
  "customer_metadata": {
    "root_authorized_keys": "ssh-rsa INSERTKEYHERE somebody@askme",
    "cloud-init:user-data": "#cloud-config\n\nresolv_conf:\n  nameservers: ['8.8.8.8']\n\nruncmd:\n - curl -s \"https://packages.cloud.google.com/apt/doc/apt-key.gpg\" | apt-key add -\n - echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >/etc/apt/sources.list.d/kubernetes.list\n - apt-get update\n - apt-get upgrade -y\n - apt-get install -y docker.io\n - systemctl enable docker\n - systemctl start docker\n - echo 'net.bridge.bridge-nf-call-iptables=1' >>/etc/sysctl.conf\n - sysctl -p\n - swapoff -a\n"
  }
}
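The escaped cloud-init:user-data string above is hard to read; unfolded, it is this cloud-config (same content, just unescaped):

```yaml
#cloud-config

resolv_conf:
  nameservers: ['8.8.8.8']

runcmd:
 - curl -s "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | apt-key add -
 - echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >/etc/apt/sources.list.d/kubernetes.list
 - apt-get update
 - apt-get upgrade -y
 - apt-get install -y docker.io
 - systemctl enable docker
 - systemctl start docker
 - echo 'net.bridge.bridge-nf-call-iptables=1' >>/etc/sysctl.conf
 - sysctl -p
 - swapoff -a
```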

To be honest, my default install already installs Docker automatically via cloud-init.
Let's create the VM: vmadm create -f k8s-micro.json

2: Install MicroK8s

Log in to your new VM and install it with snap (the current version is 1.18):

# sudo snap install microk8s --classic --channel=1.18/stable
2020-05-19T14:58:03Z INFO Waiting for restart...
microk8s (1.18/stable) v1.18.2 from Canonical✓ installed

Make sure that your user can access microk8s without needing sudo; my user is ubuntu:

# sudo usermod -a -G microk8s ubuntu
# sudo chown -f -R ubuntu ~/.kube

For the group change to take effect, you need to log out and log back in.

3: Checking the status

# microk8s status --wait-ready
microk8s is running
addons:
cilium: disabled
dashboard: disabled
dns: disabled
fluentd: disabled
gpu: disabled
helm: disabled
helm3: disabled
ingress: disabled
istio: disabled
jaeger: disabled
knative: disabled
kubeflow: disabled
linkerd: disabled
metallb: disabled
metrics-server: disabled
prometheus: disabled
rbac: disabled
registry: disabled
storage: disabled

4: Enable the standard services

As a bare minimum, I advise enabling at least the following add-ons:

# microk8s enable dns dashboard registry ingress

5: Check the dashboard

When you have the dashboard enabled you can do the following:

# kubectl proxy --accept-hosts='.*' --address=0.0.0.0 &

And open in a browser: http://{{IPOFTHEMACHINE}}:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
On the first page with the KubeConfig and Token, I just pressed the Skip button :)

[Screenshot: the Kubernetes dashboard]

Tips:

You can easily alias kubectl:

# sudo snap alias microk8s.kubectl kubectl

Terraform provider for SmartOS machines

A couple of weeks ago I saw an email on the SmartOS mailing list mentioning that John had created a Terraform provider for SmartOS.
Installing the provider is pretty easy. I did the following steps in my SmartOS zone.
My current Go version is: go version go1.10 solaris/amd64

Make sure you have set up Go correctly:

# mkdir ~/gopath
# export GOPATH=~/gopath
# go get github.com/john-terrell/terraform-provider-smartos

Let's go to the download directory and compile the source code:

# cd ~/gopath/src/github.com/john-terrell/terraform-provider-smartos
# make build
# ls ~/gopath/bin/terraform-provider-smartos

Now you can use it as a terraform provider. I copied the binary file into the terraform plugin dir:

# cp ~/gopath/bin/terraform-provider-smartos ~/.terraform.d/plugins

You are now good to go to follow the example in the github repo.
Kudos go to John for creating this awesome plugin!

Delegating a ZFS dataset

I love SmartOS, but unfortunately delegating a dataset to one of your SmartOS or LX-branded zones is not supported with vmadm. It is possible, though, with zonecfg, the old way of using ZFS with zones.

Make sure you stop the zone you want to add the dataset to:
# vmadm halt 5b297ee0-e9ad-c834-d4b8-a4e75fd38c62

Let's create a ZFS dataset:
# zfs create zones/data

Now let's edit the config:
# zonecfg -z 5b297ee0-e9ad-c834-d4b8-a4e75fd38c62

And do the following:

zonecfg:5b297ee0-...> add dataset
zonecfg:5b297ee0-...:dataset> set name=zones/data
zonecfg:5b297ee0-...:dataset> end
zonecfg:5b297ee0-...> verify
zonecfg:5b297ee0-...> commit
zonecfg:5b297ee0-...> exit

After that you can start your zone again, and the ZFS dataset will be delegated to the zone.

You can also check it with vmadm:

# vmadm get 5b297ee0-e9ad-c834-d4b8-a4e75fd38c62 | json datasets
[
  "zones/data"
]