For some time I've used this site as a profile page, but it now seems fitting to put it to proper use.
Over time I will present coding problems I've had the pleasure of solving, photography, which is a big hobby together with travel, and some cars.
This site is updated in my spare time, and I use a mix of One.com hosting and DigitalOcean.
Recently I bought a Fuji X-T30 with the 18-55 mm kit lens. I like the camera, small and compact, though if you are used to a bigger camera you will keep pressing the Q button they added on the thumb rest.
Reminder to upload some of the pictures from the new camera.
My camera setup is a Fuji X100, firmware upgraded to 2.0, and most of the pictures shown are taken with its fixed lens.
I do have the teleconverter and wide converter, but to be honest I'm not satisfied with the pictures they produce.
Mostly I use my HOYA 49 mm UV filter, just to protect my lens from scratches.
Source code repositories with Cincom Smalltalk: an interesting exercise is to revisit old languages and see what is what.
Smalltalk is a delightful small language, built on the idea that its grammar can fit on a business card. It is also image-based, but then how do you share source code between family, friends and co-workers?
You install a database and connect it to your new living world that is the system image.
Here is some short information on how to install and manage Smalltalk source code.
```shell
# Install PostgreSQL on a compatible machine
$ apt install postgresql -y

# Become the postgres user and create a database
# and a user called BERN (it's quite a famous city)
$ su - postgres
(as postgres) $ createdb
(as postgres) $ createuser -d -P BERN
```
Now turn your attention to your Smalltalk, where you need to run:

```smalltalk
Store.DbRegistry installDatabaseTables.
```

and use the following details from before:

Environment is the <databasename>
Username is BERN
Password is the one you typed in when creating BERN
You should be able to connect now and store your source code.
This is really nice, as we can now work from multiple laptops; and if you configure your network correctly, you can connect from outside your home to publish and retrieve your code.
You must make PostgreSQL listen on all network interfaces in /etc/postgresql/10/main/postgresql.conf.
You must also trust your network and change the file /etc/postgresql/10/main/pg_hba.conf to use
trust as the last parameter.
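As a sketch, the two edits could look like the fragment below; I'm assuming PostgreSQL 10 on Debian/Ubuntu, and the address range is a made-up example. Note that trust skips password checks entirely, so only use it on a network you control.

```
# /etc/postgresql/10/main/postgresql.conf
listen_addresses = '*'

# /etc/postgresql/10/main/pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS          METHOD
host    all       BERN  192.168.1.0/24   trust
```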
Your connection string will be something like this:
After many years away from the JVM, I'm back using Kotlin with Maven and all the bells and whistles.
I am not happy, compared to my newfound happiness with Makefiles and Go. JVM, Kotlin and Maven seem archaic. Granted, I've always had that thought, but now it seems like a pain to figure out the pom.xml and modules while waiting for the world to enter my computer, the thousands upon thousands of artifacts Maven pulls in, half of which I don't believe I need.
My happiness is a Linux machine with SSH, Vim and VS Code with remote extensions, and off I program; but not with Kotlin, Java and all the other stuff, where I have to figure out the errors in many various places.
We use AWS CDK; granted, it is a nice idea coming from pure CloudFormation files, but a text file has its beauty: anyone can read it, no weird syntax, just plain text up and down. It is fast to traverse and fast to deploy using my Makefiles. My workflow is beautiful, pure text files.
I really hope someone will pull the plug on all the archaic systems and go back to real text files for most of their pipeline.
Integration with CloudFormation is a lot better, as I am able to call out to the AWS CLI, describe my stacks and pull out the values without importing them, and it just works. I know that if it works on my machine it works in my pipeline, and I can rest assured my colleagues will be able to read the file using simple Notepad.
Without downloading half of the world's repositories to concatenate a string...
Maven, Ant and the JVM are in the past; the world has moved on to containerization, text files, clarity and simplicity.
Maven, come at me with your stupid dependencies and all your unknown rules and targets, and be compared to pure simplicity:
make clean build deploy
mvn dockerfile:build (which doesn't work...) package, or was it test before? I forgot and don't care.
In my professional work, I design and code full stack applications with a focus on automation and simplicity.
I've been working with computers since a young age. My first OS was Microsoft DOS 6, with QBasic and gorilla.bas.
Today I do fullstack development, automations and pipeline work.
Tools of choice:
In the coming weeks and months I will try to give my insight into what I do and how I do my development, at work and in my spare time.
At work and in my spare time, I concentrate on spending the minimal amount of effort that still lets me pick up my projects later and know exactly how they work. It also gives me the added benefit that it is very easy to teach my colleagues and friends.
A little rant about Make: it was created in April 1976, implemented in C, and first used on Unix; its file format is the Makefile. I find make and Makefiles complement my build flow with Golang, Docker and various other projects. It is really nice to have a directory layout with a Makefile where you know exactly how and what is needed to build the project.
These last years I've spent coding Golang, and my preferred tool to orchestrate build and deploy is make.
My Makefiles normally start with variables at the top, setting SHELL and various project-related environment variables; then we begin to flesh out the targets and dependencies.
I've come to a conclusion on my preferred way to compile and dockerize using Makefiles, and below is a simple skeleton.
```make
SHELL ?= /bin/sh
GIT_COMMIT = $(shell git rev-list HEAD -1)
RELEASE ?= branch
TAG ?= projectName

## BELOW TARGETS TO BE RUN
filename_x64:
	go build \
		-trimpath \
		-race \
		-ldflags "-X main.CommitID=$(GIT_COMMIT)" \
		-o $@ \
		./cmd/app

build: filename_x64

.PHONY: strip
strip: build
	strip -s filename_x64

.PHONY: pack
pack: build strip
	upx -q filename_x64

.PHONY: clean
clean:
	rm -rf filename_x64

.PHONY: docker-build
docker-build: build
	@docker build \
		--compress \
		--rm \
		-t $(TAG) \
		-t $(TAG):$(RELEASE) \
		.

.PHONY: docker-push
docker-push: docker-build
	@docker push $(TAG):$(RELEASE)
```
Another tool of the trade is GitLab CI/CD, which I'm very happy with; below you will see an example of a project's .gitlab-ci.yml file.
These two snippets are useful, but it is up to the viewer's discretion to understand what is happening. Also, why didn't we have a make test target? I think how to best test the application is up to you, and nothing another person can impose on you or your project.
```yaml
stages:
  - test
  - build
  - docker
  - deploy

.test: &test
  stage: test
  image: ubuntu:16.04
  script:
    - make test

test:app:
  <<: *test
```
I love working with CloudFormation and automation; lately I've created a generic GitLab Runner CloudFormation stack which takes 3-4 input parameters.
This is what you need if you have GitLab at work and cannot have shared runners: you create a CloudFormation stack using Amazon Linux and the cfn-bootstrap helpers.
A bit of magic using my Makefile
make cf-validate cf-package cf-deploy
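For context, here is a sketch of what those three targets might look like. The STACK, TEMPLATE and BUCKET values below are assumptions for illustration, not the actual project values:

```make
STACK    ?= gitlab-runner
TEMPLATE ?= template.yml
PACKAGED ?= packaged.yml
BUCKET   ?= my-artifact-bucket

.PHONY: cf-validate
cf-validate:
	aws cloudformation validate-template \
		--template-body file://$(TEMPLATE)

.PHONY: cf-package
cf-package:
	aws cloudformation package \
		--template-file $(TEMPLATE) \
		--s3-bucket $(BUCKET) \
		--output-template-file $(PACKAGED)

.PHONY: cf-deploy
cf-deploy:
	aws cloudformation deploy \
		--template-file $(PACKAGED) \
		--stack-name $(STACK) \
		--capabilities CAPABILITY_IAM
```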
I've included AutoScaling policies to add more GitLab runners when the average CPU is above 60 percent, plus scale-out and scale-in policies based on time of day. I still haven't figured out dynamic scaling for the case where some developers work late at night and want a machine available to build; maybe that is solvable by them installing a GitLab Runner on their own machine?
Interestingly enough, when you want to do something it's never easy. In the team we wished to be able to deploy to other AWS accounts; sure, you can use AWS CodeDeploy and AWS CodePipeline, but then you stand with twice as much infrastructure and need to write both GitLab CI and AWS CodePipeline configuration.
Assume role to the rescue. But wait, you need to parse the output from
aws sts assume-role
and decide what role to become. With
aws sts get-caller-identity
you will be able to see who you are calling as.
Go and a little bit of magic using
os/exec
helped us make a helper function that we deploy in a Docker container, and then magically we have the possibility to become the exact role, while also telling AWS which project just became somebody.
I like Go; it's very nice when you want to do this kind of stuff.
The documentation is pretty decent, see more here AWS SDK for Go
Today we are going to revisit our beloved Makefiles: you can actually write nice small functions to help your Makefile survive critique and avoid spaghetti code.
Make even has a built-in functional map via
$(foreach ...)
Here is an example of a Makefile function:

```make
cf-deploy = aws cloudformation deploy --stack-name $1 $(if $2,--parameter-overrides $2)
```

which you can call in your targets with:

```make
$(call cf-deploy,stack-name,key=value key=value)
```
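To illustrate the $(foreach ...) map together with the cf-deploy function, here is a sketch that deploys a list of stacks by expanding the function once per name; the stack names are made up:

```make
STACKS = network database app

.PHONY: deploy-all
deploy-all:
	$(foreach stack,$(STACKS),$(call cf-deploy,$(stack));)
```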
Below is an example where I showcase a pseudo-Hilbert curve, inspired by Coding Train, with a bug in their example fixed.
This site is run and maintained by David B; you can reach me by email.
All photos on this site are taken by me; all code and textual representations of code are created by me.
Copyright © 2020