Developers Planet

April 17, 2018

Bin Chen

OCI : An Open Container Spec (That Rules All)


OCI is the industry's collaborative effort to define open container specifications covering the container image format and the container runtime - that is the official tone, and it is true. The history of how it got from the initial disagreements to where it stands today is a very interesting story, or case study, about open source business models and competition.
But the past is the past. Nowadays OCI is, inarguably, THE container standard, IMO. As we'll see later in the article, it is adopted by most mainstream container implementations, including Docker, and by container orchestration systems such as Kubernetes. Plus, it is particularly helpful to anyone trying to understand how containers work internally. Open source code is awesome, but it is doubly awesome with high quality documentation!

Overview

OCI has two specs, an Image spec and a Runtime spec. Below is an overview of what they cover and how they interact.
                  Image             |              Runtime
                                    |
        config                      |   runtime config
        layers                      |   rootfs
                                    |                           delete
                                    |                             |
                 unpack             |          create             |   start/stop/exec
  Image (spec) -------------------->|  Bundle --------> container -------> process
                                    |                             |
                                    |                           hooks
                                    |
Image (Spec)

The Image spec defines the archive format of container images, which are unpacked into the runtime bundle from which we can run a container.
At the top level it is just a tarball; after it is untarred, it has a layout like the one below.
├── blobs
│   └── sha256
│   ├── 4297f01aae8e36da1ec85e36a3cc5a4b11aa34bcaa1d88cc9ca09469826cb2bf (image.manifest)
│   └── 7ea0496f252ea46535ea6932dc460cb7d82bfc86875d9d2586b6afa1e8807ad0 (image.config)
├── index.json
└── oci-layout

The layout isn't that useful without a specification of what those files are and how they are related (referenced).
We can ignore the oci-layout file for simplicity. index.json is the entry point; it primarily contains a manifest, which lists all the "resources" used by a single container image - similar to the AndroidManifest.xml file of an Android apk.
The manifest primarily references the config and the layers.
The config notably contains 1) the configuration of the image, which can and will be converted into the runtime config file of the runtime bundle, 2) the layers, which make up the root file system of the runtime bundle, and 3) some metadata regarding the image history.
The layers are what make up the final rootfs. The first layer is the base; all the other layers contain only the changes relative to their base.
Put into a diagram, it looks roughly like this.
index.json --ref--> manifest ------> Config
                       |
                       |
                       +-----------> Layers --> [ Base, upperlayer1, upperlayer2, ... ]
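
To make the references concrete, here is a minimal sketch - assuming Python and an already-unpacked image directory with the index.json/blobs layout shown earlier - that follows index.json to the manifest and then to the config and layers. The directory path and field accesses reflect my reading of the spec, so treat it as illustrative rather than authoritative.

import json
import os

def read_blob(image_dir, digest):
    """Resolve a descriptor digest such as 'sha256:abcd...' to a JSON blob file."""
    algo, value = digest.split(":", 1)
    with open(os.path.join(image_dir, "blobs", algo, value)) as f:
        return json.load(f)

def describe_image(image_dir):
    with open(os.path.join(image_dir, "index.json")) as f:
        index = json.load(f)

    # index.json points at one (or more) manifests
    manifest = read_blob(image_dir, index["manifests"][0]["digest"])

    # the manifest references the config blob and the layer blobs
    config = read_blob(image_dir, manifest["config"]["digest"])
    layers = [layer["digest"] for layer in manifest["layers"]]

    print("image Cmd:", config.get("config", {}).get("Cmd"))
    print("layers (base first):")
    for digest in layers:
        print("  ", digest)

describe_image("./my-unpacked-image")   # hypothetical path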

More on Layers

A config file is just JSON, so that part is easy. The interesting parts are how to represent a file system as a layer and how to union all the layers, given that the layers are diffs.
  • How to represent a layer?
    • For the base layer, tar all the content.
    • For non-base layers, tar the changeset relative to the base: first detect the changes to form a changeset, then tar that changeset as the representation of the layer.
  • How to union all the layers?
    Apply all the changesets on top of the base layer. This gives you the rootfs.
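
As an illustration only (not how any real runtime implements it), a union of layers can be sketched by extracting the layer tarballs in order, base first. The sketch below assumes plain layer tars and only handles the OCI-style ".wh." whiteout prefix that marks deletions; opaque whiteouts and other corner cases are ignored.

import os
import shutil
import tarfile

def apply_layers(layer_tars, rootfs_dir):
    """Build a rootfs by extracting layer tarballs in order (base first)."""
    os.makedirs(rootfs_dir, exist_ok=True)
    for layer in layer_tars:
        with tarfile.open(layer) as tar:
            for member in tar.getmembers():
                name = os.path.basename(member.name)
                if name.startswith(".wh."):
                    # whiteout entry: this layer deletes the named path
                    target = os.path.join(rootfs_dir, os.path.dirname(member.name),
                                          name[len(".wh."):])
                    if os.path.isdir(target):
                        shutil.rmtree(target)
                    elif os.path.exists(target):
                        os.remove(target)
                    continue
                tar.extract(member, rootfs_dir)

# Usage with hypothetical layer file names, base first:
# apply_layers(["base.tar", "upper1.tar", "upper2.tar"], "./rootfs")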

Runtime Spec

Once the image is unpacked into a runtime bundle on the disk file system, the Runtime spec takes over from there. Roughly, its job is to create a container and run the process(es) inside the container.

Container lifecycle

A container has a lifecycle; in essence, as you can imagine, it can be modeled with the following state diagram.
You can throw in a few other actions and states, such as pause and paused, but these are the fundamental ones.
     create
  +---------+     start      +---------+
  | created | -------------> | started |
  +---------+                +----+----+
                                  |
                                  v  stop
  +---------+     delete     +---------+
  | deleted | <------------- | stopped |
  +---------+                +---------+

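Purely as an illustration of the state diagram above (this is not part of the runtime spec or of any real runtime), the allowed transitions can be captured in a few lines of Python; the class and names here are made up for the example.

# Allowed transitions, taken from the state diagram above.
TRANSITIONS = {
    ("none",    "create"): "created",
    ("created", "start"):  "started",
    ("started", "stop"):   "stopped",
    ("stopped", "delete"): "deleted",
}

class Container:
    def __init__(self, container_id):
        self.id = container_id
        self.state = "none"

    def do(self, operation):
        nxt = TRANSITIONS.get((self.state, operation))
        if nxt is None:
            raise ValueError(f"cannot {operation} a container in state {self.state}")
        self.state = nxt
        return self.state

# Usage:
# c = Container("demo")
# c.do("create"); c.do("start"); c.do("stop"); c.do("delete")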

Image, Container, and Processes

Containers are created from a (container) image; you can create more than one container from a single image, and you can repack a container, possibly with changes to the base image, back into a new image.
Once you have a container, you can run processes inside that container, with all the nice things about a container - most notably being self-contained, i.e. not depending on the host libraries.
      images       |            container              |   processes
                   |                                    |
                   |    create                          |
 +------------+    |   +---------+     start       +----+----+
 |  runtime   | -----> | created | --------------> | started |
 |  Bundle    |    |   +---------+                 +----+----+
 +------------+    |                                    |
                   |                                    v  stop
                   |   +---------+     delete      +---------+
                   |   | deleted | <-------------- | stopped |
                   |   +---------+                 +---------+
                   |                                    |
Implementations and Ecosystems

runC is the reference implementation of the OCI runtime specification. The diagram below shows its relationship with other projects, mostly of Docker origin. Each entity below follows the format org/project.
                          +---------------------+
                          |                     |
                          |  dockerInc/docker   |
                          |                     |
                          +----------+----------+
                                     | use
                          +----------v----------+
                          |                     |
                          |      moby/moby      |
                          |                     |
                          +----------+----------+
                                     | use
 +-------------------+    +----------v------------+
 |                   |    |                       |
 | oci/runtime-spec  |    | containerd/containerd |
 |                   |    |                       |
 +---------+---------+    +----------+------------+
           ^                         | use
           | impl                    v
           |             +----------------------+       +-----------------------+
           |             |                      |  use  |                       |
           +-------------+       oci/runc       +-----> | oci/runc/libcontainer |
                         |                      |       |                       |
                         +----------------------+       +-----------------------+
To make things look even more crowded (or flourishing), let's throw in some Kubernetes pieces.
CRI is the Container Runtime Interface, defined by Kubernetes to allow for pluggable container runtimes in k8s. There are currently several implementations; among them are cri-containerd and cri-o, both of which end up using oci/runc.
       +-------------------------------+
       |            k8s/CRI            |
       | (container runtime interface) |
       +---------------+---------------+
              impl     |     impl
           +-----------+------------+
           |                        |
   +-------v--------+       +-------v-------+
   | cri-containerd |       |     cri-o     |
   +-------+--------+       +-------+-------+
           | use                    |
           v                        |
 +-----------------------+          |
 | containerd/containerd |          | use
 +-----------+-----------+          |
             | use                  |
             v                      v
       +----------------------------------+
       |             oci/runc             |
       +---+--------------------------+---+
           | impl                     | use
           v                          v
 +-------------------+     +-----------------------+
 | oci/runtime-spec  |     | oci/runc/libcontainer |
 +-------------------+     +-----------------------+

Summary

That's it for today.

by Bin Chen (noreply@blogger.com) at April 04, 2018 00:53

April 10, 2018

Marcin Juszkiewicz

XGene1: cursed processor?

Years ago Applied Micro (APM) released the XGene processor. It went into the APM BlackBird, APM Mustang, HPE M400 and several other systems. For some time there was no other AArch64 cpu available on the market, so those machines got popular as distribution builders, developer machines etc…

Then APM got acquired by someone, the CPU part got bought by someone else and any support just vanished. Their developers moved to work on the XGene2/XGene3 cpus (APM Merlin etc. systems). And people woke up with unsupported hardware.

For some time it was not an issue – Linux boots, system works. Some companies got rid of their XGene systems by sending them to Linaro lab, some moved them to ‘internal use only, no external support’ queue etc.

Each mainline kernel release was "let us check what is broken on XGene this time" time. No serial console output again? OK, we have that ugly patch for it (it got cleaned up and upstreamed). Now we have kernel 4.16 and guess what? Yes, it broke. It turned out that 4.15 was already faulty (we skipped it at Linaro).

Red Hat's bugzilla has a Fedora bug for it. It turns out that the firmware has wrong ACPI tables. Nothing new, right? We already know that it lacks PPTT for example (but that is quite a new thing, for processor topology). This time the bug is present in the DSDT.

Sounds familiar? If you had an x86 laptop about 10 years ago then it could. DSDT stands for Differentiated System Description Table. It is a major ACPI table used to describe what peripherals the machine has. And the serial ports are described wrongly there, so the kernel ignores them.

One of the solutions is bundling a fixed DSDT into the kernel/initrd, but that would require adding support for it into Debian and it would probably not get merged, as no one needs that nowadays (unless they have XGene1).

So far I have decided to stay on 4.14 for my development cartridges. It works and allows me to continue my Nova work. I do not plan to move to another platform, as at Linaro we have probably over a hundred XGene1 systems (M400 and Mustangs) which will stay there for development (it is hard to replace a 4.3U case with 45 cartridges by something else).

by Marcin Juszkiewicz at April 04, 2018 09:35

April 07, 2018

Alex Bennée

Working with dired

I’ve been making a lot more use of dired recently. One use case is copying files from my remote server to my home machine. Doing this directly from dired, even with the power of tramp, is a little too time consuming and potentially locks up your session for large files. While browsing reddit r/emacs I found a reference to this post that spurred me to look at spawning rsync from dired some more.

Unfortunately the solution is currently sitting in a pull-request to what looks like an orphaned package. I also ran into some other problems with the handling of where rsync needs to be run from so rather than unpicking some unfamiliar code I decided to re-implement everything in my own package.

I’ve still got some debugging to do to get it to cleanly handle multiple sessions as well as a more detailed mode-line status. Once I’m happy I’ll tag a 0.1 and get it submitted to MELPA.

While getting more familiar with dired I also came up with this little helper:

(defun my-dired-frame (directory)
  "Open up a dired frame which closes on exit."
  (interactive)
  (switch-to-buffer (dired directory))
  (local-set-key
   (kbd "C-x C-c")
   (lambda ()
     (interactive)
     (kill-this-buffer)
     (save-buffers-kill-terminal 't))))

Which is paired with a simple alias in my shell setup:

alias dired="emacsclient -a '' -t -e '(my-dired-frame default-directory)'"

This works really nicely for popping up a dired frame in your terminal window and cleaning itself up when you exit.

by Alex at April 04, 2018 10:12

April 04, 2018

Bin Chen

GPG: The GNU Privacy Guard


We talked about cryptography theory before; now let's put that into practice and look at a widely used cryptography tool.
GPG, the GNU Privacy Guard, is an open source tool that allows you to encrypt and sign your data and communications. To recap: encryption ensures confidentiality, and signing ensures integrity and non-repudiation.

Create key pair and share the public key

Create a key pair

$ gpg --gen-key 
gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1m
Key expires at Fri 04 May 2018 08:30:15 AM AEST
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
"Heinrich Heine (Der Dichter) "

Real name: nk
Name must be at least 5 characters long
Real name: nkkkkk
Email address: nk@email.com
Comment:
You selected this USER-ID:
"nkkkkk "

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

gpg: key E9F66E1F marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 2 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: next trustdb check due at 2018-05-03
pub 2048R/E9F66E1F 2018-04-03 [expires: 2018-05-03]
Key fingerprint = FC55 55E3 7C49 FE30 1A5D E8E9 D9EC 1FB5 E9F6 6E1F
uid                  nkkkkk <nk@email.com>
sub 2048R/D8CB679E 2018-04-03 [expires: 2018-05-03]

Fingerprints, KeyId, User ID

Some of the gpg commands expect a key ID and some expect a user ID.
A fingerprint uniquely identifies a key pair; a key ID is the last 8 or 16 characters of the fingerprint, called the short and long key ID respectively. For example, in the key pair we just created, FC55 55E3 7C49 FE30 1A5D E8E9 D9EC 1FB5 E9F6 6E1F is the fingerprint and E9F66E1F is the (short) key ID. When a gpg command expects a key ID, both the fingerprint and the short/long key ID can be used. For the technical details, check the RFC spec here.
A user ID takes the format "Real Name (Comment) <email>"; you can also use part of the user ID as the user ID. When a gpg command expects a user ID, you can use both the user ID and the key ID.
Say --export, which expects a user ID; all of the following are valid:
$ gpg --output public.key --armor --export nk@email.com
$ gpg --output public.key --armor --export nkkkkk
$ gpg --output public.key --armor --export E9F66E1F
To make things easier to remember, the key fingerprint always works.
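
As a tiny illustration of that relationship (plain string slicing, nothing gpg-specific), the long and short key IDs are just the trailing characters of the fingerprint:

fingerprint = "FC55 55E3 7C49 FE30 1A5D E8E9 D9EC 1FB5 E9F6 6E1F"

hex_digits = fingerprint.replace(" ", "")
long_keyid = hex_digits[-16:]    # last 16 hex characters
short_keyid = hex_digits[-8:]    # last 8 hex characters

print(long_keyid)    # D9EC1FB5E9F66E1F
print(short_keyid)   # E9F66E1F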

Export your public key

You will need to share your public key with others so that they can 1) encrypt messages for you and 2) verify that a message is from you.
Unlike with ssh-keygen, you export your public key explicitly. The output is an ASCII-armored version of your public key and you can share it through email.
gpg --output public.key --armor --export user-id

Import your public Key

Once others get your public key, they will need to import it. Again, to repeat: so that they can use it to encrypt messages for you, or to verify that a message is from you.

Encrypt & Decrypt

There are two use cases for encryption: 1) you want to encrypt a message so that no one else can read it, and 2) you want others to encrypt messages so that only you can read them.
In either case, the message is encrypted with your public key and decrypted with your private key. You already have both keys, so no extra key steps are needed; for the second case, the other user first needs to import your public key, and we have discussed how to do that.
Let's say Lily has a doc with some confidential content that she wants to share with me.
$ cat doc
This is a confidential doc
To encrypt:
$ gpg --recipient user-id --encrypt doc
It will generate an encrypted file called doc.gpg. If you dump it, it looks like the gibberish below, which is exactly what we want.
$ cat doc.gpg 

���^\-<��]�WCG���O�� t���
L,���X�K9�
�O�D������œ����1L[�o
�e-�7p���悌"�BYKoc��5����ػ/�mQ��;��'�~5_�tN։�)X+�
UC�S�
�V����h�n���?̬ib8wrp����>�\˼��4Qs���K�ft��a8=���'~C.�\%妌.{�G��DJ��#?sc
After receiving it, you decrypt it using your private key, and you will be asked for the passphrase you specified when initially creating the key in order to unlock your private key. This is an extra security measure to ensure that even if people manage to get your secret key, there is still more work to do.
$ gpg --output doc.dec --decrypt doc.gpg

You need a passphrase to unlock the secret key for
user: "nkkkkk "
2048-bit RSA key, ID D8CB679E, created 2018-04-03 (main key ID E9F66E1F)

gpg: encrypted with 2048-bit RSA key, ID D8CB679E, created 2018-04-03
"nkkkkk "
And, we have the clear text:
$ cat doc.dec 
This is a confidential doc

Sign & Verify

$ gpg --output doc.sig --clearsign doc
You need a passphrase to unlock the secret key for
user: "nkkkkk "
2048-bit RSA key, ID E9F66E1F, created 2018-04-03
$ cat doc.sig 
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

This is a confidential doc
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJaxCjbAAoJENnsH7Xp9m4fKB8H/Aqu6PZsnZfqjst+kNxJbNRj
V2MryeO6l6K4UgPevH7C92qY1JEzUyVok2Eqecfb+rKwjYOzQGtAYa+nWVLsJMOi
5XvGJYNdLsMmuW4/dB8K2mnZXczaZpMKUab7LZ3BzQI5Kg5LYchMuwViL6f8PLEN
KmAR3H3CQmR/ZsU5YHi4uy2Fq/ujMLhEt1Uu2qMhocwj1ZJfZj/aHsvl4A2YtlGD
DmFDllFgv5MvTuAduBQ4jG+g09Jn9mJ0Cf7I6ozAbCxu+bm3vrkymYUqTvq2/szM
zs55qGxz5oJTnzjOf0+N95e9LDtzrTKoNzZfDzU0SdJDnG+h0E1hpZBrR5Xi+Zo=
=ut47
-----END PGP SIGNATURE-----
To verify
$ gpg --verify doc.sig
gpg: Signature made Wed 04 Apr 2018 11:22:35 AM AEST using RSA key ID E9F66E1F
gpg: Good signature from "nkkkkk <nk@email.com>"
If someone managed to modify the signed message, say adding "oh no" after "This is a confidential doc".
$ cat doc.modified 
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

This is a confidential doc oh no
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJaxCjbAAoJENnsH7Xp9m4fKB8H/Aqu6PZsnZfqjst+kNxJbNRj
V2MryeO6l6K4UgPevH7C92qY1JEzUyVok2Eqecfb+rKwjYOzQGtAYa+nWVLsJMOi
5XvGJYNdLsMmuW4/dB8K2mnZXczaZpMKUab7LZ3BzQI5Kg5LYchMuwViL6f8PLEN
KmAR3H3CQmR/ZsU5YHi4uy2Fq/ujMLhEt1Uu2qMhocwj1ZJfZj/aHsvl4A2YtlGD
DmFDllFgv5MvTuAduBQ4jG+g09Jn9mJ0Cf7I6ozAbCxu+bm3vrkymYUqTvq2/szM
zs55qGxz5oJTnzjOf0+N95e9LDtzrTKoNzZfDzU0SdJDnG+h0E1hpZBrR5Xi+Zo=
=ut47
-----END PGP SIGNATURE-----
The verification will fail and the receiver will know the message can't be trusted.
$ gpg --verify doc.modified 
gpg: Signature made Wed 04 Apr 2018 11:22:35 AM AEST using RSA key ID E9F66E1F
gpg: BAD signature from "nkkkkk <nk@email.com>"

Sending your public key to a key server to let the whole world know

Sending a public key by email doesn't sound quite cool in the modern age. You can publish your public key on a GPG key server so that others can search for and import your public key from the server.
Send it,
$ gpg --keyserver pgp.mit.edu --send-keys E9F66E1F
gpg: sending key nk@email.com to hkp server pgp.mit.edu
Search it,
$ gpg --keyserver pgp.mit.edu --search-keys nk@email.com
gpg: searching for "nk@email.com" from hkp server pgp.mit.edu
(1) nkkkkk
2048 bit RSA key E9F66E1F, created: 2018-04-03, expires: 2018-05-03
Keys 1-1 of 1 for "nk@email.com".
Enter number(s), N)ext, or Q)uit > Q
We typed Q)uit here, so we just searched for the key without importing it. If we had typed 1, which is the key number, the key E9F66E1F would have been imported.
You can also import the key using the key ID (not the user ID) directly.
$ gpg --keyserver pgp.mit.edu --recv-keys E9F66E1F
gpg: requesting key E9F66E1F from hkp server pgp.mit.edu
gpg: key E9F66E1F: "nkkkkk <nk@email.com>" not changed
gpg: Total number processed: 1
gpg: unchanged: 1

Summary

With GPG, you'll be able to encrypt and sign your data and communications, improving security and protecting privacy. Plus, it makes you look cool or "geeky" to show a GPG public key fingerprint on your name card or in your Twitter intro, even if you have never ever used it.
FC55 55E3 7C49 FE30 1A5D E8E9 D9EC 1FB5 E9F6 6E1F

by Bin Chen (noreply@blogger.com) at April 04, 2018 04:40

April 02, 2018

Gema Gomez

What to make next?

One of the most complicated parts of the fiber crafts, and a part that normally takes at least a couple of weeks for me, is the planning phase. As soon as you are done with a project, you try to figure out what you want to do next. The first step is to decide what I feel inspired to make:

  • Quick project
  • Long and intricate project
  • Use existing yarn project
  • Use existing pattern project
  • Learn a new skill only project
  • Garment or accessory project
  • Something I have done before or something new
  • Who will be the owner? Is it for me? Someone in my family? Friends? A special occasion?

In my case, it depends on the time of the year, the plans I have for the coming months, whether I have stumbled upon something super cool that I could make for someone and how much spare time I have over the coming months.

The first thing I decided is that I want to use this gorgeous variegated yarn I bought a few months back:

Yarn

I only have one skein, it is 100% merino, Unic from Bergere. The weight of it is DK, but it comes on 4ply untangled fibre, so it will be like working with 4 strands of fingering yarn at once. I have 660m of material (200g).

With this amount of yarn I cannot really make an adult size garment, but I could make a rather gorgeous complement, either cowl, infinity scarf or a shawl. I could also make a garment for a child or a baby. The changing color of the fibre also makes for a nice color effect if I were to find the right pattern for it.

Q&A

Knitting or crochet?

Either one would work for me this time around.

What are you making? For whom?

Something easy and quick that showcases the yarn's color. Probably a cowl/shawl/infinity scarf for myself. Not in the mood for learning a new skill, so a pattern with some known techniques will have to do.

Which patterns are worth considering? Are there any nice examples out there of projects made with this yarn?

I looked at the patterns showcased by the manufacturer of the yarn, but none of them were really my cup of tea. Kept searching until I found a book of shawls that has patterns specific for variegated yarn like this one. I bought the book yesterday and I am trying to decide which one to make, it is called The Shawl Project: Book Four, by The Crochet Project.

Now the only question left is to figure out which of the projects in the book I like best and get crocheting. Will post a picture of the project when it is finished!

by Gema Gomez at April 04, 2018 23:00

April 01, 2018

Gema Gomez

Olca Cowl

As part of my yarn shopping spree in San Francisco last October, I bought some Berroco Mykonos (66% linen, 26% nylon, 8% cotton), color hera (8570). I decided to make a crocheted Olca Cowl with it, it required 2 x 50g hanks (260 m):

Olca cowl finished

The pattern was followed verbatim; I used a 3.75mm (F) hook as per the pattern description:

hook and yarn

This was a quick and fun pattern to work; I managed to finish it in about a month of spare time. I recommend it for any advanced crochet beginner. Once the first three rows are worked, the rest is mechanical and quick to grow.

by Gema Gomez at April 04, 2018 23:00

March 30, 2018

Naresh Bhat

Benchmarking BigData


Purpose:

The purpose of this blog is to explain the different types of benchmark tools available for BigData components. We did a talk on BigData benchmarking at Linaro Connect @LasVegas in 2016. This is my effort to collect it all in one place, with more information.

We have to remember that all the BigData components and benchmarks were developed keeping the x86 architecture in mind.
  • So in the first place we should make sure that all the relevant benchmark tools compile and run on AArch64.
  • Then we should go ahead and try to optimize them for AArch64.
Different types of benchmarks and standards
  • Micro benchmarks: to evaluate specific lower-level system operations
    • E.g. HiBench, HDFS DFSIO, AMP Lab Big Data Benchmark, CALDA, Hadoop workload examples (sort, grep, wordcount, TeraSort, GridMix, PigMix)
  • Functional/component benchmarks: specific to a low-level function
    • E.g. basic SQL: individual SQL operations like select, project, join, order-by...
  • Application-level benchmarks
    • BigBench
    • SparkBench
The tables below summarize the different types of benchmarks.
Benchmark Efforts - Microbenchmarks

Benchmark | Workloads | Software Stacks | Metrics
DFSIO | Generate, read, write, append, and remove data for MapReduce jobs | Hadoop | Execution Time, Throughput
HiBench | Sort, WordCount, TeraSort, PageRank, K-means, Bayes classification, Index | Hadoop and Hive | Execution Time, Throughput, resource utilization
AMPLab benchmark | Part of CALDA workloads (scan, aggregate and join) and PageRank | Hive, Tez | Execution Time
CALDA | Load, scan, select, aggregate and join data, count URL links | Hadoop, Hive | Execution Time

Benchmark Efforts - TPC

Benchmark | Workloads | Software Stacks | Metrics
TPCx-HS | HSGen, HSData, Check, HSSort and HSValidate | Hadoop | Performance, price and energy
TPC-H | Data warehousing operations | Hive, Pig | Execution Time, Throughput
TPC-DS | Decision support benchmark: data loading, queries and maintenance | Hive, Pig | Execution Time, Throughput

Benchmark Efforts - Synthetic

Benchmark | Workloads | Software Stacks | Metrics
SWIM | Synthetic user-generated MapReduce jobs of reading, writing, shuffling and sorting | Hadoop | Multiple metrics
GridMix | Synthetic and basic operations to stress-test the job scheduler and compression/decompression | Hadoop | Memory, Execution Time, Throughput
PigMix | 17 Pig-specific queries | Hadoop, Pig | Execution Time
MRBench | MapReduce benchmark complementary to TeraSort; data warehouse operations with 22 TPC-H queries | Hadoop | Execution Time
NNBench | Load testing the NameNode and HDFS I/O with small payloads | Hadoop | I/O
SparkBench | CPU, memory, shuffle and IO intensive workloads; Machine Learning, Streaming, Graph Computation and SQL workloads | Spark | Execution Time, Data process rate
BigBench | Interactive-based queries on synthetic data | Hadoop, Spark | Execution Time

Benchmark Efforts

Benchmark | Workloads | Software Stacks | Metrics
BigDataBench | 1. Micro benchmarks (sort, grep, WordCount); 2. Search engine workloads (index, PageRank); 3. Social network workloads (connected components (CC), K-means and BFS); 4. E-commerce site workloads (relational database queries (select, aggregate and join), collaborative filtering (CF) and Naive Bayes); 5. Multimedia analytics workloads (Speech Recognition, Ray Tracing, Image Segmentation, Face Detection); 6. Bioinformatics workloads | Hadoop, DBMSs, NoSQL systems, Hive, Impala, HBase, MPI, Libc, and other real-time analytics systems | Throughput, Memory, CPU (MIPS, MPKI - misses per kilo-instructions)

Let's go through each of the benchmarks in detail.

Hadoop benchmark and test tools:

The Hadoop source comes with a number of benchmarks. TestDFSIO, nnbench and mrbench are in the hadoop-*test*.jar file, and TeraGen, TeraSort and TeraValidate are in the hadoop-*examples*.jar file in the Hadoop source code.

You can check this using the commands:

       $ cd /usr/local/hadoop
       $ bin/hadoop jar hadoop-*test*.jar
       $ bin/hadoop jar hadoop-*examples*.jar

While running the benchmarks you might want to use the time command, which measures the elapsed time. This saves you the hassle of navigating to the Hadoop JobTracker interface. The relevant metric is the 'real' value in the first row.

      $ time hadoop jar hadoop-*examples*.jar ...
      [...]
      real    9m15.510s
      user    0m7.075s
      sys     0m0.584s

TeraGen, TeraSort and TeraValidate

This is the most well known Hadoop benchmark. TeraSort aims to sort the data as fast as possible. The test suite combines the HDFS and MapReduce layers of a Hadoop cluster. The TeraSort benchmark consists of 3 steps: generate input via TeraGen, run TeraSort on the input data, and validate the sorted output data via TeraValidate. We have a wiki page which explains this test suite; you can refer to the Hadoop Build Install And Run Guide.
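
As a small illustration of those 3 steps, here is a sketch in Python that shells out to the standard Hadoop example programs; the jar name, row count and HDFS paths below are assumptions that will vary with your Hadoop distribution.

import subprocess

EXAMPLES_JAR = "hadoop-mapreduce-examples.jar"   # assumed jar name
ROWS = str(10_000_000)                           # number of 100-byte rows for TeraGen

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1. Generate the input data
run("hadoop", "jar", EXAMPLES_JAR, "teragen", ROWS, "/benchmarks/terasort-input")
# 2. Sort it as fast as possible
run("hadoop", "jar", EXAMPLES_JAR, "terasort",
    "/benchmarks/terasort-input", "/benchmarks/terasort-output")
# 3. Validate that the output is globally sorted
run("hadoop", "jar", EXAMPLES_JAR, "teravalidate",
    "/benchmarks/terasort-output", "/benchmarks/terasort-report")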

TestDFSIO

TestDFSIO is part of the hadoop-mapreduce-client-jobclient.jar file. It stress tests I/O performance (throughput and latency) on a clustered setup. This test will shake out the hardware, OS and Hadoop setup of your cluster machines (NameNode/DataNodes). The tests are run as a MapReduce job using a 1:1 mapping (1 map per file). This test is helpful for discovering performance bottlenecks in your network. The benchmark write test is followed by a read test: use the -write switch for write tests and -read for read tests. The results are stored by default in TestDFSIO_results.log; you can use the -resFile switch to choose a different file name.
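
A minimal sketch of a write-then-read run, again driven from Python: -write, -read and -resFile are the switches mentioned above, while the jar name and the -nrFiles/-fileSize sizing options are assumptions that should be checked against your Hadoop version.

import subprocess

TEST_JAR = "hadoop-mapreduce-client-jobclient-tests.jar"   # assumed jar name

def dfsio(mode, nr_files="10", file_size_mb="1000",
          res_file="TestDFSIO_results.log"):
    subprocess.run(["hadoop", "jar", TEST_JAR, "TestDFSIO", mode,
                    "-nrFiles", nr_files, "-fileSize", file_size_mb,
                    "-resFile", res_file], check=True)

dfsio("-write")   # write test first, so the read test has files to read
dfsio("-read")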

MRBench (MapReduce Benchmark)

MRBench loops a small job a number of times. It checks whether small job runs are responsive and running efficiently on your cluster. It puts the focus on the MapReduce layer, as its impact on the HDFS layer is very limited. The multiple parallel MRBench issue has been resolved, hence you can run it from different boxes.

Test command to run 50 small test jobs:
      $ hadoop jar hadoop-*test*.jar mrbench -numRuns 50

Exemplary output, which shows that the job finished in about 31 seconds:
      DataLines       Maps    Reduces AvgTime (milliseconds)
      1               2       1       31414

NNBench (NameNode Benchmark) for HDFS

This test is useful for load testing the NameNode hardware & configuration. The benchmark generates a lot of HDFS-related requests with normally very small payloads, putting a high HDFS management stress on the NameNode. The test can be run simultaneously from several machines, e.g. from a set of DataNode boxes, in order to hit the NameNode from multiple locations at the same time.


The TPC is a non-profit, vendor-neutral organization with a reputation for providing the most credible performance results to the industry. The TPC plays the role of a "consumer reports" for the computing industry. It provides a solid foundation for complete system-level performance, a methodology for calculating total system price and price/performance, and a methodology for measuring the energy efficiency of a complete system.

TPC Benchmarks
  • TPCx-HS
We have a collaboration page, TPCxHS. The X: Express, H: Hadoop, S: Sort. The TPCx-HS kit contains the TPCx-HS specification documentation, the TPCx-HS user's guide, scripts to run the benchmark and Java code to execute the benchmark load. A valid run consists of 5 separate phases run sequentially, with no overlap in their execution. The benchmark test consists of 2 runs (runs with the lower and higher TPCx-HS Performance Metric). No configuration or tuning changes or reboots are allowed between the two runs.

The TPC Express Benchmark Standard is easy to implement, run and publish, and is less expensive. The test sponsor is required to use the TPCx-HS kit as it is provided. The vendor may choose an independent audit or a peer audit, for which a 60-day review/challenge window applies (as per TPC policy). The result is approved by a super majority of the TPC General Council. All publications must follow the TPC Fair Use Policy.
  • TPC-H
    • TPC-H benchmark focuses on ad-hoc queries
The TPC Benchmark™H (TPC-H) is a decision support benchmark. It consists of a suite of business oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions. The performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@Size), and reflects multiple aspects of the capability of the system to process queries. These aspects include the selected database size against which the queries are executed, the query processing power when queries are submitted by a single stream, and the query throughput when queries are submitted by multiple concurrent users. The TPC-H Price/Performance metric is expressed as $/QphH@Size.
  • TPC-DS
    • This is the standard benchmark for decision support
The TPC Benchmark DS (TPC-DS) is a decision support benchmark that models several generally applicable aspects of a decision support system, including queries and data maintenance. The benchmark provides a representative evaluation of performance as a general purpose decision support system. A benchmark result measures query response time in single user mode, query throughput in multi user mode and data maintenance performance for a given hardware, operating system, and data processing system configuration under a controlled, complex, multi-user decision support workload. The purpose of TPC benchmarks is to provide relevant, objective performance data to industry users. TPC-DS Version 2 enables emerging technologies, such as Big Data systems, to execute the benchmark.
  • TPC-C
    • TPC-C is an On-Line Transaction Processing Benchmark

Approved in July of 1992, TPC Benchmark C is an on-line transaction processing (OLTP) benchmark. TPC-C is more complex than previous OLTP benchmarks such as TPC-A because of its multiple transaction types, more complex database and overall execution structure. TPC-C involves a mix of five concurrent transactions of different types and complexity either executed on-line or queued for deferred execution. The database is comprised of nine types of tables with a wide range of record and population sizes. TPC-C is measured in transactions per minute (tpmC). While the benchmark portrays the activity of a wholesale supplier, TPC-C is not limited to the activity of any particular business segment, but, rather represents any industry that must manage, sell, or distribute a product or service.

TPC vs SPEC models

Here is a comparison between the TPC and SPEC benchmark models:

TPC model                                      | SPEC model
Specification based                            | Kit based
Performance, price and energy in one benchmark | Performance and energy in separate benchmarks
End-to-end                                     | Server centric
Multiple tests (ACID, Load)                    | Single test
Independent review                             | Peer review
Full disclosure                                | Summary disclosure
TPC Technology Conference                      | SPEC Research Group, ICPE (International Conference on Performance Engineering)



BigBench

BigBench is a joint effort with partners in industry and academia to create a comprehensive and standardized BigData benchmark. One reference reading about BigBench is "Toward An Industry Standard Benchmark for BigData Analytics". BigBench builds upon and borrows elements from existing benchmarking efforts (such as TPCx-HS, GridMix, PigMix, HiBench, the Big Data Benchmark, YCSB and TPC-DS). BigBench is a specification-based benchmark with an open-source reference implementation kit; as a specification-based benchmark it is technology-agnostic and provides the necessary formalism and flexibility to support multiple implementations. It is focused on execution time calculation and consists of around 30 queries/workloads (10 of them from TPC). The drawback is that it is a structured-data-intensive benchmark.

Spark Bench for Apache Spark

We are able to build it on ARM64. The setup is complete for a single node, but the run scripts are failing: when the Spark Bench examples are run, a KILL signal is observed which terminates all workers. This is still under investigation, as there are no useful logs to debug; the lack of a proper error description and of documentation is a challenge. A ticket has already been filed on the Spark Bench git repository and is unresolved.


Hive TestBench

It is based on the TPC-H and TPC-DS benchmarks. You can experiment with Apache Hive at any data scale. The benchmark contains a data generator and a set of queries. It is very useful for testing basic Hive performance on large data sets. We have a wiki page for Hive TestBench.


GridMix

This is a stripped-down version of common MapReduce jobs (sorting text data and SequenceFiles). It is a tool for benchmarking Hadoop clusters and a trace-based benchmark for MapReduce: it evaluates MapReduce and HDFS performance.

It submits a mix of synthetic jobs, modeling a profile mined from production loads. The benchmark attempts to model the resource profiles of production jobs in order to identify bottlenecks.

Basic command line usage:

 $ hadoop gridmix [-generate <size>] [-users <users-list>] <iopath> <trace>
       <iopath> - destination directory
       <trace>  - path to a job trace

Con - it is challenging to explore the performance impact of combining or separating workloads, e.g. through consolidating from many clusters.


PigMix

PigMix is a set of queries used to test Pig performance. There are queries that test latency (how long does it take to run this query?) and queries that test scalability (how many fields or records can Pig handle before it fails?).

Usage: run the below commands from the Pig home directory

ant -Dharness.hadoop.home=$HADOOP_HOME pigmix-deploy (generate test dataset)
ant -Dharness.hadoop.home=$HADOOP_HOME pigmix (run the PigMix benchmark)

The documentation can be found at Apache pig - https://pig.apache.org/docs/ 


SWIM

This benchmark enables rigorous performance measurement of MapReduce systems. It contains suites of workloads of thousands of jobs, with complex data, arrival and computation patterns, and it informs highly targeted, workload-specific optimizations. This tool is highly recommended for MapReduce operators. Performance measurement guide: https://github.com/SWIMProjectUCB/SWIM/wiki/Performance-measurement-by-executing-synthetic-or-historical-workloads


AMPLab Big Data Benchmark

This is a BigData benchmark from AMPLab, UC Berkeley. It provides quantitative and qualitative comparisons of five systems:
  • Redshift – a hosted MPP database offered by Amazon.com based on the ParAccel data warehouse
  • Hive – a Hadoop-based data warehousing system
  • Shark – a Hive-compatible SQL engine which runs on top of the Spark computing framework
  • Impala – a Hive-compatible* SQL engine with its own MPP-like execution engine
  • Stinger/Tez – Tez is a next generation Hadoop execution engine currently in development
This benchmark measures response time on a handful of relational queries: scans, aggregations, joins and UDFs, across different data sizes.


BigDataBench

This is a specification-based benchmark with two key components: a data model specification and a workload/query specification. It is a comprehensive end-to-end big data benchmark suite; see the GitHub page for BigDataBench.

BigDataBench is a benchmark suite for scale-out workloads, different from SPEC CPU (sequential workloads), and PARSEC (multithreaded workloads). Currently, it simulates five typical and important big data applications: search engine, social network, e-commerce, multimedia data analytics, and bioinformatics.

Currently, BigDataBench includes 15 real-world data sets, and 34 big data workloads.


HiBench

This benchmark test suite is for Hadoop. It contains 4 different categories of tests, 10 workloads and 3 types. It is a good benchmark with the metrics time (sec) & throughput (bytes/sec).



References

https://www2.eecs.berkeley.edu/Pubs/TechRpts/2011/EECS-2011-21.pdf 

Terasort, TestDFSIO, NNBench, MRBench

https://wiki.linaro.org/LEG/Engineering/BigData
https://wiki.linaro.org/LEG/Engineering/BigData/HadoopTuningGuide 
https://wiki.linaro.org/LEG/Engineering/BigData/HadoopBuildInstallAndRunGuide 
http://www.michael-noll.com/blog/2011/04/09/benchmarking-and-stress-testing-an-hadoop-cluster-with-terasort-testdfsio-nnbench-mrbench/ 

GridMix3, PigMix, HiBench, TPCx-HS, SWIM, AMPLab, BigBench

https://hadoop.apache.org/docs/current/hadoop-gridmix/GridMix.html 
https://cwiki.apache.org/confluence/display/PIG/PigMix 
https://wiki.linaro.org/LEG/Engineering/BigData/HiBench 
https://wiki.linaro.org/LEG/Engineering/BigData/TPCxHS 
https://github.com/SWIMProjectUCB/SWIM/wiki 
https://github.com/amplab
 https://github.com/intel-hadoop/Big-Data-Benchmark-for-Big-Bench 
http://www.academia.edu/15636566/Handbook_of_BigDataBench_Version_3.1_A_Big_Data_Benchmark_Suite 



Industry Standard benchmarks

TPC - Transaction Processing Performance Council http://www.tpc.org 
SPEC - The Standard Performance Evaluation Corporation https://www.spec.org 
CLDS - Center for Largescale Data System Research http://clds.sdsc.edu/bdbc 

by Naresh (noreply@blogger.com) at March 03, 2018 09:30

March 29, 2018

Marcin Juszkiewicz

Shenzhen trip

A few months ago, at the end of the previous Linaro Connect gathering, there was an announcement that the next one would take place in Hong Kong. This gave me the idea of repeating the Shenzhen trip, but in a bit longer version.

So I mailed people at Linaro and there were some responses. We quickly agreed on going there before Connect. Alex, Arnd, Green and I were landing around noon, Riku a few hours later, so we decided that we would meet in Shenzhen.

We crossed the border in Lok Ma Chau, my visa had the highest price again, and then we took a taxi to the Maker Hotel (still called “Quchuang Hotel” in Google Maps and on Booking.com) next to all those shops we wanted to visit. Then we went for a quick walk through the Seg Electronics Market. Lots of mining gear: 2000W power supplies, strange PCI Express expanders etc. Dinner, meeting with Riku, and the day ended.

I woke up at 02:22 and was not able to fall asleep again. Around 6:00 it turned out that the rest of the team was awake as well, so we decided to go around and search for some breakfast. The deserted streets looked a bit weird.

Back at the hotel we were discussing random things. Then someone from Singapore joined us and we talked about changes in how Shenzhen stores/factories operate. He told us that there are fewer and fewer stores as business moves to the Internet. Then a Chinese family came with a boy of about seven years. He said something, his mother translated, and it turned out that he wanted to touch my beard. As it was not the first time my beard got such attention, I allowed him. That surprise on his face was worth it. And then we realized that we had not seen a bearded Chinese man on the street.

As the stores were opening at 10:00 we still had a lot of time, so we went for a random walk, including Shenzhen Center Park, which is a really nice place:

Then the stores started to open. Fake phones, real phones, tablets, components, devices, misc things… Walking there was fun in itself. I bought some items from my list.

They also had a lot of old things. Intel Overdrive system for example or 386/486 era processors and FPUs.

From weird things: 3.5″ floppy disks and Intel Xeon Platinum 8175 made for Amazon cloud only.

Lot and lot of stuff everywhere. Need power supply? There were several stores with industrial ones, regulated ones etc. Used computers/laptops? Piles after piles. New components? Lot to choose from. Etc, etc, etc…

After several hours we finally decided to go back to Hong Kong and rest. The whole trip was fun. I really enjoyed it. Even without getting half of items from my ‘buy while in Shenzhen’ list ;D

And I ordered a Shenzhen fridge magnet on Aliexpress… They were not available to buy at any place we were.

by Marcin Juszkiewicz at March 03, 2018 11:54

March 26, 2018

Marcin Juszkiewicz

25 years of Red Hat

Years ago I bought the Polish translation of the “Under the Radar” book about how Red Hat was started. It was a good read and then went onto the bookshelf.

Years passed. In the meantime I got hired by Red Hat. To work on Red Hat Enterprise Linux. For the AArch64 architecture.

Then one day I was talking with my wife about books and I looked at the shelf. And found that book again. Took it and said:

You know, when I bought that book I did not even dream that one day I would be working at Red Hat.

Today the company turned 25. An amount of time longer than my career. I remember how surprised I was when I realised that some of my friends have worked at the company for 20 years already.

This is the oldest company I have worked for. Directly, at least, as some of the customers of companies I worked for in the past were probably older. And I hope that one day my work title will be “Retired Software Engineer”, as my wife once said. And that it will be at this company.

by Marcin Juszkiewicz at March 03, 2018 18:35

Alex Bennée

Solving the HKG18 puzzle with org-mode

One of the traditions I like about Linaro’s Connect event is the conference puzzle. Usually set by Dave Piggot they provide a challenge to your jet lagged brain. Full disclosure: I did not complete the puzzle in time. In fact when Dave explained it I realised the answer had been staring me in the face. However I thought a successful walk through would make for a more entertaining read 😉

First the Puzzle:

Take the clues below and solve them. Once solved, figure out what the hex numbers mean and then you should be able to associate each of the clue solutions with their respective hex numbers.

Clue Hex Number
Lava Ale Code 1114DBA
Be Google Roe 114F6BE
Natural Gin 114F72A
Pope Charger 121EE50
Dolt And Hunk 12264BC
Monk Hops Net 122D9D9
Is Enriched Tin 123C1EF
Bran Hearing Kin 1245D6E
Enter Slim Beer 127B78E
Herbal Cabbages 1282FDD
Jan Venom Hon Nun 12853C5
A Cherry Skull 1287B3C
Each Noun Lands 1298F0B
Wave Zone Kits 12A024C
Avid Null Sorts 12A5190
Handcars All Trim 12C76DC

Clues

It looks like all the clues are anagrams. I was lazy and just used the first online anagram solver that Google pointed me at. However we can automate this by combining org-mode with Python and the excellent Beautiful Soup library.

from bs4 import BeautifulSoup
import requests
import re

# ask internet to solve the puzzle
url="http://anagram-solver.net/%s" % (anagram.replace(" ", "%20"))
page=requests.get(url)

# fish out the answers
soup=BeautifulSoup(page.text)
answers=soup.find("ul", class_="answers")
for li in answers.find_all("li"):
    result = li.text
    # filter out non computer related or poor results
    if result in ["Elmer Berstein", "Tim-Berners Lee", "Babbage Charles", "Calude Shannon"]:
        continue
    # filter out non proper names
    if re.search("[a-z] [A-Z]", result):
        break

return result

So with :var anagram=clues[2,0] we get

Ada Lovelace

I admit the “if result in []” is a bit of a hack.

Hex Numbers

The hex numbers could be anything. But lets first start by converting to something else.

Hex Prompt Number
1114DBA 17911226
114F6BE 18151102
114F72A 18151210
121EE50 19000912
12264BC 19031228
122D9D9 19061209
123C1EF 19120623
1245D6E 19160430
127B78E 19380110
1282FDD 19410909
12853C5 19420101
1287B3C 19430204
1298F0B 19500811
12A024C 19530316
12A5190 19550608
12C76DC 19691228

The #+TBLFM: is $1='(identity remote(clues,@@#$2))::$2='(string-to-number $1 16)

This is where I went down a blind alley. The fact that they all had the top bit set made me think that Dave was giving a hint to the purpose of the hex numbers in the way many cryptic crosswords do (I know he is a fan of these). However the more obvious answer is that everyone in the list was born in the last millennium.

Looking up Birth Dates

Now I could go through all the names by hand and look up their birth dates but as we are automating things perhaps we can use computers for what they are good at. Unfortunately there isn’t a simple web-api for looking up this stuff. However there is a project called DBpedia which takes Wikipedia’s data and attempts to make it semantically useful. We can query this using a semantic query language called SparQL. If only I could call it from Emacs…

PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbp: <http://dbpedia.org/property/>

select ?birthDate where {
  { dbr:$name dbo:birthDate ?birthDate }
  UNION
  { dbr:$name dbp:birthDate ?birthDate }
}

So calling with :var name="Ada_Lovelace" we get

"birthDate"
1815-12-10
1815-12-10

Of course it shouldn’t be a surprise this exists. And in what I hope is a growing trend sparql-mode supports org-mode out of the box. The $name in the snippet is expanded from the passed in variables to the function. This makes it a general purpose lookup function we can use for all our names.

There are a couple of wrinkles. We need to format the name we are looking up with underscores to make a valid URL. Also the output spits out a header and possibly multiple birth dates. We can solve this with a little wrapper function. It also introduces some rate limiting so we don’t smash DBpedia’s public SPARQL endpoint.

;; rate limit
(sleep-for 1)
;; do the query
(let* ((str (s-replace-all '((" " . "_") ("Von" . "von")) name))
       (ret (eval
             (car
              (read-from-string
               (format "(org-sbe get-dob (name $\"%s\"))" str))))))
  (string-to-number (replace-regexp-in-string "-" "" (car (cdr (s-lines ret))))))

Calling with :var name="Ada Lovelace" we get

18151210

Full Solution

So now we know what we are doing we need to solve all the puzzles and lookup the data. Fortunately org-mode’s tables are fully functional spreadsheets except they are not limited to simple transformations. Each formula can be a fully realised bit of elisp, calling other source blocks as needed.

Clue Solution DOB
Herbal Cabbages Charles Babbage 17911226
Be Google Roe George Boole 18151102
Lava Ale Code Ada Lovelace 18151210
A Cherry Skull Haskell Curry 19000912
Jan Venom Hon Nun John Von Neumann 19031228
Pope Charger Grace Hopper 19061209
Natural Gin Alan Turing 19120623
Each Noun Lands Claude Shannon 19160430
Dolt And Hunk Donald Knuth 19380110
Is Enriched Tin Dennis Ritchie 19410909
Bran Hearing Kin Brian Kernighan 19420101
Monk Hops Net Ken Thompson 19430204
Wave Zone Kits Steve Wozniak 19500811
Handcars All Trim Richard Stallman 19530316
Enter Slim Beer Tim Berners-Lee 19550608
Avid Null Sorts Linus Torvalds 19691228

The #+TBLFM: is $1='(identity remote(clues,@@#$1))::$2='(org-sbe solve-anagram (anagram $$1))::$3='(org-sbe frob-dob (name $$2))

The hex numbers are helpfully sorted so as long as we sort the clues table by the looked up date of birth using M-x org-table-sort-lines we are good to go.

You can find the full blog post in raw form here.

by Alex at March 03, 2018 10:19

March 22, 2018

Naresh Bhat

A dream come true: Himalayan Odyssey - 2016 (Day-0 to 5)

History:

THE HIMALAYAS, as most everyone knows, are the highest mountains in the world, with 30 peaks over 24,000 feet. The adventure of a lifetime doesn't get much bigger or higher than riding and chasing the mountains of the Himalayas.

Royal Enfield (RE) motorcycles have been manufactured and sold in INDIA since 1907. These motorcycles are best suited for INDIAN road conditions and have been used by the INDIAN ARMY since the period of the Second World War.

There is a saying: "FOUR WHEELS MOVE THE BODY - BUT TWO WHEELS MOVE THE SOUL". I have been a motorcycle enthusiast since my childhood days, and I always dreamt of owning an RE motorcycle after getting a job. Right now I own two variants of RE motorcycles, the “Royal Enfield Thunderbird Twinspark” (TBTS) and the “Squadron Blue Classic Dispatch”, which is a Limited Edition.

Thunder Bird Twin Spark 350cc 2011 model

Squadron Blue Dispatch 500cc  2015 model


The TBTS is 350cc, good for cruising on long stretches of highway. The Dispatch has a 500cc EFI engine which gives a quick response to the throttle. Hence I decided to take the classic 500cc motorcycle for the Himalayan Odyssey (HO).

In INDIA, Royal Enfield conducts different motorcycling tours, e.g. the HO, Tour of Tibet, Tour of Nepal, Tour of Rajasthan, etc. Out of all these tours the HO is considered the toughest one. The reason is very simple: riding in the Himalayan mountains is not that easy, considering the road conditions, unpredictable weather, high altitudes etc. The Himalayan mountain roads are completely shut down for 6 months; the INDIAN Army clears the snow, then opens and maintains them for the other 6 months. Every year the army announces the opening and closing dates.

For the past 12 years RE has been conducting the HO. I took part in HO-2016, the 13th HO - “18 DAYS LIKE NO OTHER IN RIDING”. It was conducted between the 6th and 23rd of July 2016. Our group had 70 men and 14 women from all over the world. The men's and women's odyssey routes were different, but they met at LADAKH, then again took separate routes and met for the last-day celebration party in Chandigarh. The men's group route map is below.


HO Preparation:


It takes a lot of effort to convince your family and make suitable arrangements at the office. I had been planning the HO ride for the last 5 years by accumulating leave, and I was trying to be as physically fit as possible by doing exercises on a regular basis. After registration you are required to go through a physical fitness test and submit the documents. The physical fitness test includes a 5 km run and 50 push-ups in 45 min. There is also a physical fitness certificate from a local doctor that you need to submit to RE. Documents to be submitted include medical test reports for blood, urine and a treadmill test (TMT), a self-written medical history, a medical check-up fitness certificate from a doctor and an indemnity bond.

The HO team includes a doctor, a backup van, mechanics, media people, 3-4 lead riders from RE, etc. All the information is communicated to you post registration.

The HO ride starts from Delhi and ends at Chandigarh. I am located in Bangalore, so I also had to plan to reach Delhi by July 7th with my motorcycle. I knew I would need 3 days to reach Delhi from Bangalore by road. Since I had a very limited amount of time, I planned to ship my motorcycle in a container and fly to Delhi. The transport of my motorcycle cost INR Rs. 5780.00 one way; actually the cost of transporting my motorcycle was more than my air tickets 😅. The flight tickets cost INR Rs. 7000.00 round trip. Once you register for the HO trip they will include you in closed Facebook and WhatsApp groups, and it is very easy to discuss all your questions in those groups.

Ready to ship
I used VRL Logistics (Vijayanand Road Lines) to ship my bike from Bangalore to Delhi. Many of you may ask why I couldn't just rent a motorcycle in Delhi. It is simply because if I ride my own motorcycle in the mountains, I will understand my motorcycle in a better way and the personal attachment to the motorcycle will be stronger. That's the reason RE suggests taking your own motorcycle on any of its rides.

locked in a container
Luggage types and split-up:

When we start our ride, our overall luggage will be split into two. 

1. The luggage that we carry on the motorcycle, which we call “satellite luggage”
    A duffel bag is a good choice. You can fasten it to your motorcycle using bungee cords or luggage straps. Remember to waterproof this bag well, as it is exposed to the elements in whatever terrain you ride. Packing of this bag is very crucial: distribute the weight evenly, and if there is some space left in the bag use compression straps to ensure stuff does not move around inside it. Tie the bag after checking its placement thoroughly, and do so only on the centre stand. We will end up doing this even at camp sites, where finding a flat piece of land can be tricky; use stones to ensure that your motorcycle is as upright as possible when you're fastening your luggage. It is very tempting to use saddle bags for satellite luggage, but this will leave you with more empty space. Avoid starting the trip with saddlebags on your bike and then shifting them to the luggage vehicle.

What my satellite luggage will definitely have
1. A change of clothes- a pair of denims/cargos, a T-shirt and a casual jacket
2. A hat
3. A pair of running shoes
4. Winter gloves - depending on where we're on the Odyssey
5. Toiletries - I'll have my lip balm/guard and sunscreen
6. GoPro, some mounts, batteries and a power bank
7. a Beanie or a woolen buff
8. a Torch
9. Spare cables and a tube

2. The luggage that is carried in the luggage vehicle.
     This will be minus the riding gear that you bring, as that will be worn by you for the duration of the entire ride. This luggage is restricted to one piece per rider with a max limit of 15 kilos. Why 15 kilos? After you have removed all the gear and your satellite luggage, we have found that this is a comfortable cut-off. It is also a comfortable weight for you to carry to your room and to load/unload onto the luggage vehicle every day. This luggage will need to be loaded and unloaded every day, and in case of rain the bags can get soiled and wet. It is best to use some level of waterproofing to safeguard what's in the bags; a waterproof cover or waterproofing from the inside could do the job.


Day-0:

Everybody needs to reach Delhi two days before the HO trip. They will book the accommodation for you. On the very first day I just did the check-in and collected my motorcycle in Delhi.

The next day schedule was as below


Flag Off day and complete Itinerary:

The 13th edition of the Royal Enfield Himalayan Odyssey was flagged off from New Delhi on 9th July 2016. This is a call to all those who love to ride on tough and testing terrain and have the passion to ride with RE. The year 2016 saw 75 riders on one of the most spectacular motorcycle journeys in the world.

Here is our detailed itinerary



Day-1: Delhi To Chandigarh

The first day started as below

  • 5 AM luggage loading - HO
  • 6:30 AM - breakfast
  • 7:15 AM HO start to India gate
Let this begin!

Group photo @INDIA Gate
The first day's ride always starts from India Gate, Delhi. We took a group photo, did some Buddhist rituals and prayed for a safe ride. The briefing includes the regroup point, road conditions and some common mistakes committed by riders.


We were 12 people from Karnataka State and grouped together to take some group photos.

Riders joined from Karnataka State

The flag-off was done by the RE sales director. In the video just after the flag-off there is some news channel coverage: Auto Today and NDTV




flag-off
Chandigarh, designed by the Swiss-French modernist architect Le Corbusier, is a city and a union territory in India that serves as the capital of both of the neighboring states of Haryana and Punjab. The city is not part of either of the two states and is governed directly by the Union Government, which administers all such territories in the country.

In the afternoon we reached Chandigarh and checked into the hotel. Chandigarh is a very well planned and beautiful city, with lots of trees and parks, so we did a quick tour of a couple of places in the city.

Day-2: Chandigarh To Manali

In the HO, every day is a learning day. You become much closer to your motorcycle each day; in other words, you understand the motorcycle's handling better. The day starts with luggage loading, breakfast, briefing and ride-out, and the same timings are followed every day.

Briefing

The briefing lasts about 10-15 minutes and is very important for a rider, because it covers the kind of roads you are going to ride that day and important riding tips.

We reached the Manali Highland hotel by 5 PM and visited the local market to purchase items required for the ride. This will be the last city on our onward journey to Leh. After Manali, the real ride starts: there will be less tarmac and more rough roads, and all the shops you see will be in tents until you reach Leh. I also met a couple of cyclists who were cycling up to Leh.

Cyclists @Manali hotel
Manali is a high-altitude Himalayan resort town in India’s northern Himachal Pradesh state. It has a reputation as a backpacking center and honeymoon destination. Set on the Beas River, it’s a gateway for skiing in the Solang Valley and trekking in Parvati Valley. It's also a jumping-off point for paragliding, rafting and mountaineering in the Pir Panjal mountains, home to 4,000m-high Rohtang Pass.

Day-3: Manali To Keylong (Jispa)

The road from Manali to Rothang pass is a single road; although it had tarmac, it was not in good condition. We took a break at the Rothang pass base camp.
Base camp
We started slowly climbing the pass. I could feel the thin air and the altitude change, and my motorcycle was also responding slowly to the throttle; the machine needs oxygen for combustion too. The weather on Rothang pass changes every 15 minutes. The last leg of the climb was very foggy and I could hardly see the road.

Rothang Pass roads
After a couple of kms it was very sunny and bright. We were warned not to stay more than 10 minutes in the high-altitude region.
Top of Rothang Pass
We took a couple of photos and started descending the Rothang pass. The good thing is that after crossing the pass, the road is completely empty and traffic free; you only see the occasional Indian Army truck or goods carrier. But suddenly the road becomes very rough and dusty. After travelling a few kms on these rough roads my motorcycle started behaving in a weird way: the headlight, horn and indicators stopped working. I stopped to check the problem and, fortunately, spotted one of the RE HO trip coordinators there. He did a basic check and identified that a fuse had blown. In a couple of minutes he replaced it with the spare fuse that is readily available in the side panel box, and I continued my ride till the lunch break.


Lunch time..:)

Dusty roads on the way to Tandi
At some places the roads were under construction. Since they had laid wet mud with stones, it was very difficult to handle a motorcycle that weighs around 200 kg.
Road construction
We finally reached the Tandi fuel pump and filled the tank, since there is no filling station for the next 365 km.
Tandi
Tandi gas station
The rough and dusty road continues. At some places the dust settled on the road was nearly 10-15 cm deep.


We continued to ride and reached the Jispa camp. The river was flowing just behind our tents. It is really heaven on earth, a very beautiful village.
Jispa camp

Our Tent
We had snacks and hung out. From evening onwards it was very cold because of the wind and the cold river just behind our tents. I felt I should have taken a room instead of a tent; that was purely our own mistake, since we reached early and grabbed a tent to stay in.

Day-4: Keylong (Jispa) To Sarchu

Jispa is a village in Lahaul, in the Indian state of Himachal Pradesh. Jispa is located 20 km north of Keylong and 7 km south of Darcha, along the Manali-Leh Highway and the Bhaga river. There are approximately 20 villages between Jispa and Keylong.

In the briefing we were given instructions on how to do water crossings. All the water crossings have small pebbles and very chilly water, and one should make sure that the motorcycle tyres do not get stuck between these small stone beds.
Ready to leave Jispa valley
The distance from Jispa to Sarchu is quite short, which helps, because riding on terrain with no roads is difficult. We finished the morning briefing and started riding.
Briefing @Jispa
We crossed a couple of water streams before reaching Sarchu. The technique for crossing a water stream is very simple: first, grip the motorcycle tank tightly with your knees; then keep your upper body relaxed, focus, look ahead along the flooded stretch of road and give it throttle.
Riding beside Bhaga river

Water crossing
Valley view
Lunch break
We had a break for lunch; I had some noodles. You will not get any food in these tents other than omelettes, noodles and plain rice.

We reached Sarchu quite early, around 3-4 PM, but within 15-20 minutes of arriving the headache started. Almost everyone had mountain sickness. Acute Mountain Sickness (AMS) is the mildest form and is very common; the symptoms can feel like a hangover: dizziness, headache, muscle aches, nausea. The camp doctors checked the heartbeats of everyone affected.

We were unable to eat anything and could not sleep or rest. Even after walking 100 metres we were unable to breathe. It was a horrible day which I will never forget in my life.

Sarchu camp
We again had tented accommodation, with only solar-charged lights, and there were no army hospitals nearby. After the sun goes down there is a sudden drop in temperature. It felt like the situation was life threatening.

Sarchu is a major halt point with tented accommodation in the Himalayas on the Leh-Manali Highway, on the boundary between Himachal Pradesh and Ladakh (Jammu and Kashmir) in India. It is situated between Baralacha La to the south and Lachulung La to the north, at an altitude of 4,290 m (14,070 ft).

Day-5: Sarchu To Leh

I was very eager to leave Sarchu. Because of the high altitude and the very cold weather I could not get a good sleep. The RE guys brought petrol (gas) in the backup van and all of us queued up to top up. The stay in the Sarchu tents was the most uncomfortable of the trip, but it is true that once you acclimatize to the Sarchu altitude, you are better prepared to travel further.
@Patso
I shifted my satellite luggage to the backup van; as experience shows, it is very uncomfortable to ride with saddle bags on the motorcycle. After Sarchu the roads are open, with no traffic for several kms. I was riding alone and stopped to take pictures. When I reached the bottom of the "Gata Loops", a couple of my friends joined me.

GATA Loops begin
Gata Loops is a name unknown to everyone except the few who have travelled on the Manali-Leh highway, or are planning to do so. It is a series of twenty-one hairpin bends that takes you to the top of the third high-altitude pass on this highway, Nakeela, at a height of 15,547 ft.
More (Mo-ray) plains
I have covered hundreds of mountain miles but had never seen a plateau. When I came upon the More (pronounced ‘mo-ray’) Plains, they were much bigger than what I had visualized of plateaus from school geography books.
They are endless. Well, 50 km of flatland at an elevation of 15,000 feet deserves that epithet! And they are flat, mile after mile, till they run into the surrounding mountains. Camp here for the evening and you’ll see the most stunning of sunsets. The area is surprisingly active: there are always workers building or repairing roads.


We continued the ride towards Leh after taking a few pics at the More plains. We passed through Pang, Meroo and Debring, and at Rumtse we had a lunch break. The Indus river flows parallel to the road, with steep mountain cliffs on the other side. I remember each mountain being a different colour after Debring. By evening we reached Leh and checked into the hotel "Namgyal Palace".


by Naresh (noreply@blogger.com) at March 03, 2018 10:16

A dream ride on mighty Himalayas - 2016 (Day-6 to Day-10)

Day-6: Leh

Leh, a high-desert city in the Himalayas, is the capital of the Leh region in northern India’s Jammu and Kashmir state. Originally a stop for trading caravans, Leh is now known for its Buddhist sites and nearby trekking areas. Massive 17th-century Leh Palace, modeled on the Dalai Lama’s former home (Tibet’s Potala Palace), overlooks the old town’s bazaar and maze like lanes.

Leh city

Apricot seller 

Vegetable seller

Leh is at an altitude of 3,524 metres (11,562 ft), and is connected via National Highway 1 to Srinagar in the southwest and to Manali in the south via the Leh-Manali Highway. In 2010, Leh was heavily damaged by the sudden floods caused by a cloud burst.

Dry fruits shop

Indian spices seller
Leh was an important stopover on trade routes along the Indus Valley between Tibet to the east, Kashmir to the west and also between India and China for centuries. The main goods carried were salt, grain, pashm or cashmere wool, charas or cannabis resin from the Tarim Basin, indigo, silk yarn and Banaras brocade.

Day-7: Leh To Hunder

This was the day we had all been eagerly waiting for: riding to the Hunder (Nubra) valley via the highest motorable road, the "Khardung La" pass. The pass is situated at an elevation of 5,602 metres (18,379 ft) in the Ladakh region and is 39.7 km from Leh, which itself is at an altitude of 3,524 metres (11,562 ft). You can imagine how steep the uphill journey from Leh to Khardung La is: a painful 3-hour ride up a winding road. Khardung La is the highest motorable pass in the world.

Khardungla top

Highest motorable pass ..Yuppie..reached..:)
Best known as the gateway to the Nubra and Shyok valleys in the Ladakh region of Jammu and Kashmir, the Khardung La Pass, commonly pronounced as Khardzong La, is a very important strategic pass into the Siachen glacier.

The pristine air, the scenic beauty one sees all around and the feeling that you are on top of the world has made Khardung La a very popular tourist attraction in the past few years.

The first 24 km, as far as the South Pullu check point, are paved. From there to the North Pullu check point, about 15 km beyond the pass, the roadway is primarily loose rock, dirt and occasional rivulets of snow melt.

The Nubra valley is a beautiful place where you can see sand dunes, water and green apricot trees. We stayed at Hunder in a tent. After reaching the valley we had hot snacks and went for double-humped camel rides.
Nubra river

Sand dunes @Nubra valley

In summer Nubra is a mix of everything: water, trees, sand dunes, rocks and mountains. But it is completely frozen for 6 months.
We had a campfire and party night.

Party all night..:)
The Siachen glacier water was flowing just beside our tent, and the villagers use the flowing water directly. We were just 80 km away from the Siachen glacier.

Tents just beside glacier water flow

You can directly drink glacier water

Karnataka state boys outside Royal Camp..Ready to ride out

Day-8: Hunder To Leh

Hundar is a village in the Leh district of Jammu and Kashmir, India. It is located in the Nubra tehsil, on the bank of Shyok River. The Hunder Monastery is located here. Hundar was once the capital of former Nubra kingdom.

Indian Army check post
You can see the Nubra river flowing in the background in the picture below.

Nubra valley view
Nubra was the last destination of our journey; now it was time to start the return journey, and we headed back to Leh via the Khardung La pass. When I was halfway up Khardung La it started snowing. Hands almost frozen and roads slippery: I could not have asked for more 😊. It was a struggle riding up to Khardung La pass; because of the low oxygen the throttle response was very poor.

It was fun to ride highest motorable pass in rain and snow
I finally reached the highest motorable road, the Khardung La pass. The snowfall had only increased. Sipping lemon tea gave a good feeling like never before. We took a couple of pictures and started descending. The headache was already hitting back due to high-altitude sickness. At a couple of places we even faced landslides; when snow settles on the mountains, landslides start on their own because of the weight of the snow.

It started raining heavily when we reached the South Pullu check point. We took a break and had lunch. After the rain stopped, we continued our journey and reached the Hotel Namgyal Palace in Leh.


Hotel
Day-9: Leh To Debring (Tso Kar)

Today we rode back towards Debring, which is near the More plains. We stayed in a camp near a salt lake called Tso Kar, and we were also about to touch the world's second highest pass, "Tanglang La". High altitude, sub-zero temperatures and cold wind are pretty common, and one needs all one's physical and mental strength to withstand them and keep riding.

We had our first break and regroup point at a place called Rumtse, a small village even by Ladakh standards. Rumtse is the first human settlement on the way from Lahaul to Ladakh after the Taglang pass. It is located 70 km east of Leh and is the starting point for the trek to Tso Moriri. Rumtse lies in the Rupshu Valley, which is sandwiched between Tibet, Zanskar and Ladakh.
Tea break
The Tanglang La pass is located in the Zanskar range, at the northernmost tip of India, and is famed as the second highest mountain pass in the Leh-Ladakh region. It sits at an altitude of around 17,000 ft on the Manali-Leh highway and, at such an altitude, acts like a gateway to Leh.

The pass provides for a scenic view as it sways away from the main highway. Ample vegetation on both sides further cools the already chilled air and at times, the sharp bends provide just the adrenaline push adventurists crave.

Second highest motorable pass

Second highest pass
After reaching the More plains we had a group photo session.

Ready for group photo

60+ riders lined up for group photo at Moreplanes
Next we continued towards the Tso Kar camp site. There were no roads; it is a very flat area, full of dust and small stones. After approximately 15 km we reached the camp, had evening snacks and tea, and rested at Tso Kar for the night.

Tsokar camp site
It was a nightmare because of the sub-zero temperature and cold, windy weather. In the early morning we were not able to touch the cold water to brush or bathe, and there was no hot water available, since we were camped in the middle of nowhere. You can see nothing but a flat plain for miles and miles.

Day-10: Debring (Tso Kar) To Keylong

The distance between Tso Kar and Keylong is around 236 km, but it takes 7+ hours to cover because the road conditions are very bad. Hence we just needed to focus on the road and try to cover more distance with fewer breaks. I stopped only at the More plains to take some pics.

A view from Moreplanes

Dusty and tested thoroughly..:)
We reached the hotel at Keylong by 5 PM. The weather was very chilly and the location beautiful. I visited the local market and purchased items like a winter cap and gloves. The local market is very small and the roads are narrow.

Motorcycles lined up outside Keylong hotel for check-up

Waiting for my turn
There was a fantastic view from our room balcony. We also completed a round of motorcycle check-ups, because the next day's ride would be very challenging, with more water crossings...:)

To be continued.....:)

by Naresh (noreply@blogger.com) at March 03, 2018 09:58

March 12, 2018

Marcin Juszkiewicz

Android pisses me off

If you want a smartphone then you are limited to Android or iOS; other options just do not count. The iOS philosophy and the devices which run it are not something I want to own/use, so I am left with Android.

My first Android device was a Nokia N900 with Froyo (Android 2.2) based NITdroid. When I saw “K9 mail” on it I knew that Maemo was going to the trash (its mail client “Modest” worked only in landscape and used a font size for the visually impaired). So a few weeks later I bought a Nexus S. Then a Nexus 4. Next was a Samsung Galaxy S4 which I won in some contest. Then I moved to a Nexus 5, an LG G3, and now use a ZTE Axon 7. I had/have a few tablets as well: first some Tegra2-based one with Honeycomb (sold quickly), an Archos G9, a Nexus 7 (2012) and finally a Lenovo S8.

For most of the time I tried to run the latest possible Android on my devices. Of course a non-vendor one, because the Android world cares about a device for a year (or a year and a half in the best case) and then ignores it. I stopped caring whether there are any updates for my devices. Sure, they are full of security holes etc., but sorry, I am not planning to spend a few hundred euros every year to replace three phones and a tablet.

With Android Oreo (not present for any of my devices) Google announced ‘Project Treble’, which should fix some of that. I suppose that by the year 2020 maybe 40-50% of new devices will support it. With old versions of Android anyway, because the binary blobs will be too old to keep up with newer releases.

Switching devices is the other thing: doing backups, restoring backups, (re)configuring applications etc. The last time I did a factory reset on one of my phones it took 2 hours before the Google Play Store finished installing applications, including ones I had removed half a year earlier. And of course, forget about text messages or call history. WTF, Google?

Backups are fun anyway. The official way is “hope that Google keeps backups of your app settings in the cloud”. Most apps that do a sensible backup require root, which usually requires a factory reset first. Or all they do is provide another UI for the ‘adb backup’ command (which does some backup and then decides to do nothing for a random amount of time).

ADB itself is a joke. Sure, it can be used to send files over a USB connection, but it looks like its authors live in the 90s and all they have is a USB 1.1 host controller in their PCs. I cannot find another excuse for its speed of 3 MB/s (yes, THREE megabytes per second). Again: WTF?

My current plan is to use my Axon 7 with Nougat for about a year (or two) until it finally dies or meets the ground one time too many. And to still be pissed off any time backups are involved (changing devices in the family or sending them for repair).

by Marcin Juszkiewicz at March 03, 2018 15:53

March 11, 2018

Gema Gomez

Azufral Capelet

A few months ago I bought some Berroco Mykonos yarn in San Francisco. I also bought a pattern for it, the Azufral pattern, written by Donna Yacino. Now, after a few months with not a lot of spare time to work on it, I have managed to finish the capelet:

Capelet

The pattern was followed verbatim, adjusting for gauge and the measurements of the desired garment. The needles used were Knit Pro Symfonie Cubic Square Needles - 30cm (Pair) - 4.00mm, single pointed.

The yarn is Berroco Mykonos (66% linen, 26% nylon, 8% cotton), color aura (8544), handwash in lukewarm water only and lay flat to dry. I hardly ever go for yarn that is not machine washable, but this one was so shiny and nice to the touch that I could not help it.

The fabric looks as follows once finished:

fabric

by Gema Gomez at March 03, 2018 00:00

March 05, 2018

Marcin Juszkiewicz

SnowpenStack in Dublin

Last week I was in Dublin, Ireland, at the OpenStack PTG. It also happened to be the worst weather since 1982: there was snow and strong wind, so the conference quickly got renamed to SnowpenStack.

The main reasons for me to be there were:

  • meet all those developers who took some time and looked at my changes
  • discuss some other changes/plans
  • share aarch64 knowledge with OpenStack projects

The conference took place in the Croke Park stadium. We used meeting rooms on the 4th, 5th and 6th floors. One day I took the wrong stairs by mistake and ended up on top of the stadium in just a T-shirt… I quickly ran to an elevator to get back to the proper floor ;D

The schedule was split into two parts: Monday and Tuesday were for mixed-team sessions, while Wednesday to Friday were for discussions within teams. I spoke mostly with the Kolla, Nova and Infra teams, and there were some discussions with Ironic, Kuryr and a few others too. I also met several Polish developers, so there was time to speak my native language ;D

On Tuesday I went to the city centre to buy some souvenirs for the family (and a 99th fridge magnet for myself). I launched Ingress, did one mosaic to see more of the city, and after 11 kilometres I was back at the hotel just in time for a small party in the GAA museum. And then a pub trip with the Polish guys. When I finally got back to the hotel (about 01:30) there were still discussions going on in the lobby and I took part in one of them.

Team discussions started on Wednesday. I visited the Nova one summarizing the ‘Queens’ release; it turned out that it went better than previous ones. The main problem was a lack of reviews: not everyone likes to pester developers on IRC to get some attention for their patches. I was asked for my opinion a few times, as I was one of the few fresh contributors.

The Kolla sessions were a bit chaotic in my opinion. The recently chosen PTL was not present and the person supposed to replace him got stuck at home due to the weather. One of the discussions I remember was about Ceph: should we keep using our own images or move to ‘ceph-ansible’ instead? The final decision was to keep them, as it looked like there were more cons than pros in moving to ‘ceph-ansible’ images.

I discussed Arm64 support with the Infra team. We (Linaro) provided them with resources on one of our developer clouds to get aarch64 present in the OpenStack CI gates. It turned out that the machines work and some initial tests were done. I was also informed that the diskimage-builder patches to add GPT/UEFI support will be reviewed soon.

And then there were some weather-related issues. On Wednesday every attendee got an email saying that the Irish government had issued a Red Alert, which strongly suggests staying inside unless you really have to go out. As attendance was not mandatory, people were asked to first check whether they were comfortable going to Croke Park (especially those not staying in the hotel nearby). The next day the organization team announced that the venue would close after lunch to make sure that everyone was safe. And the whole conference moved to the hotel…

Imagine developers discussing EVERYWHERE in a hotel. The lobby was occupied by a few teams, Infra found a table in the library corner, Nova and Neutron occupied the breakfast room. The bar area was quite popular and soon some beers were visible here and there. A few teams went to meeting rooms, etc. The WiFi bandwidth was gone… Some time later the hotel staff created a separate wireless network for our use. The situation on Friday was similar.

Something else happened on Wednesday too: people started receiving information that their flights were cancelled. There were some connections on Thursday and then nothing flew on Friday. Kudos to the hotel staff for being aware of it: they stopped taking external reservations to make sure that PTG attendees had a place to stay for longer (as some people got rebooked even to Thursday).

Even on Saturday it was hard to get to the airport. No taxis would come to the hotel due to the snow on the street, but if you walked 500 metres a cab could be hailed. Many people went for buses (line 700 was the only one working). The crowd at the airport was huge; some of those people looked like they lived there (which was probably true). Several flights were delayed (even by 4-5 hours) and others got cancelled, but most of them were flying.

Despite the weather, sitting in a hotel in Dublin was safe, and so was walking around, as there were only about 15-20 centimetres of snow on the streets. There were several snowmen around and people had fun playing in the snow. But at the same time the local news was reporting that 30,000 homes lacked electricity and some people had got stuck in their cars. There was no public transport, no trains, no buses, and far fewer people on the streets.

Was it worth attending? Yes. Will I attend the next ones? Probably not, as the PTG is very developer-oriented, while I spend most of my OpenStack time building its components or doing some testing.

by Marcin Juszkiewicz at March 03, 2018 12:36

March 02, 2018

Marcin Juszkiewicz

OpenStack ‘Queens’ release done

The OpenStack community released the ‘Queens’ version this week. IMHO it is quite an important moment for the AArch64 community as well, because it works out of the box for us.

Gone are things like setting hw_firmware_type=uefi for each image you upload to Glance: Nova now assumes UEFI to be the default firmware on AArch64 (unless you set the property to a different value for some reason). This simplifies things, as users do not have to worry about it, and we should have fewer support questions on new setups of the Linaro Developer Cloud (which will be based on ‘Queens’ instead of ‘Newton’).
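As a reminder of what that per-image workaround looked like, here is a minimal sketch (an illustration only, not taken from the release notes; it assumes the python-openstackclient CLI is installed and uses a hypothetical image name):

    import subprocess

    # Pre-Queens workaround (no longer needed): tag an AArch64 Glance image so it boots with UEFI.
    # "my-aarch64-image" is a hypothetical image name.
    subprocess.run(
        ["openstack", "image", "set",
         "--property", "hw_firmware_type=uefi",
         "my-aarch64-image"],
        check=True,
    )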

There is a working graphical console if your guest image uses a properly configured kernel (4.14 from Debian/stretch-backports works fine; 4.4 from Ubuntu/xenial, used by CirrOS, does not have graphics enabled). It is a handy feature which some users had already asked us for.

The sad thing is the state of live migration on AArch64. It simply does not work through the whole stack (Nova, libvirt, QEMU), because we have no idea exactly which CPU we are running on and how compatible it is with other CPU cores. In theory live migration between the same type of processors (like XGene1 -> XGene1) should be possible, but we do not have even that level of information available. More information can be found in bug 1430987, reported against libvirt.

The less sad part? We now set the guest CPU model to ‘host-passthrough’ by default (in Nova), so no matter which deployment method is used, it should work out of the box.

When it comes to building (Kolla) and deploying (Kolla Ansible), most of the changes were done during the Pike cycle; during the Queens one most of the changes were small tweaks here and there. I think our biggest change was convincing everyone in Kolla(-ansible) to migrate from MariaDB 10.0.x (usually from external repositories) to 10.1.x taken from the distribution (Debian) or from RDO.

What will Rocky bring? Better hotplug for PCI Express machines (AArch64/virt, x86/q35 models) is one thing. I hope the live migration situation will improve as well.

by Marcin Juszkiewicz at March 03, 2018 13:22

March 01, 2018

Gema Gomez

OpenStack Queens on ARM64

We are in Dublin this week, at the OpenStack PTG. We happen to be here on a week that has red weather warnings all over Europe, so most of us are stuck in Dublin for longer than we expected.

Queens has been released!

During Pike/Queens my team at Linaro (Software Defined Infrastructure) have been enabling different parts of OpenStack on ARM64 and making sure the OpenStack code is multiarch when necessary (note that I use the terms AArch64 and ARM64 interchangeably).

There seems to be some confusion about the nature of the servers we are using, here is a picture of one of our racks:

servers

Queens is the first release that we feel confident will run out of the box on ARM64, a milestone of collaboration not only from the Linaro member companies but also from the OpenStack community at large. OpenStack projects have been welcoming and inclusive of the diversity, helping us ramp up: either giving direction and reviewing our code or fixing issues themselves.

We will be deploying Queens with Kolla on the Linaro Developer Cloud (ARM64 servers) and documenting the experience for new Kolla users, including brownfield upgrades.

The Linaro Developer Cloud is a collaborative effort of the Linaro Enterprise Group to ensure ARM64 building and testing capabilities are available for different upstream projects, including OpenStack.

This cycle we added resources from one of our clouds to the openstack-infra project so the community can start testing multiarch changes regularly. The bring-up of the ARM64 cloud in infra is in progress; there are only 8 executors currently available to run jobs, which we’ll be using for experimental jobs for the time being. The long term goal of this effort is to be able to run ARM64 jobs on the gates by default for all projects.

What next? Next steps include running experimental gate jobs for Kolla and any other project that volunteers, ironing out any leftover issues, making sure devstack runs smoothly, incrementally making sure we have a stable platform to run tests on, and inviting all OpenStack projects to take part if they are interested. If you want to discuss any specifics or have questions, either use the Kolla mailing list or reach out to hrw or gema on freenode.

by Gema Gomez at March 03, 2018 00:00

February 21, 2018

Alex Bennée

Workbooks for Benchmarking

While working on a major re-factor of QEMU’s softfloat code I’ve been doing a lot of benchmarking. It can be quite tedious work as you need to be careful you’ve run the correct steps on the correct binaries and keeping notes is important. It is a task that cries out for scripting but that in itself can be a compromise as you end up stitching a pipeline of commands together in something like perl. You may script it all in a language designed for this sort of thing like R but then find your final upload step is a pain to implement.

One solution to this is to use a literate programming workbook like this. Literate programming is a style where you interleave your code with natural prose describing the steps you go through. This is different from simply having well commented code in a source tree. For one thing you do not have to leap around a large code base as everything you need is in the file you are reading, from top to bottom. There are many solutions out there including various python based examples. Of course being a happy Emacs user I use one of its stand-out features, org-mode, which comes with multi-language org-babel support. This allows me to document my benchmarking while scripting up the steps in a variety of “languages” depending on my needs at the time. Let’s take a look at the first section:

1 Binaries To Test

Here we have several tables of binaries to test. We refer to the
current benchmarking set from the next stage, Run Benchmark.

For a final test we might compare the system QEMU with a reference
build as well as our current build.

| Binary                                                                       | title            |
|------------------------------------------------------------------------------+------------------|
| /usr/bin/qemu-aarch64                                                        | system-2.5.log   |
| ~/lsrc/qemu/qemu-builddirs/arm-targets.build/aarch64-linux-user/qemu-aarch64 | master.log       |
| ~/lsrc/qemu/qemu.git/aarch64-linux-user/qemu-aarch64                         | softfloat-v4.log |

Well that is certainly fairly explanatory. These are named org-mode tables which can be referred to in other code snippets and passed in as variables. So the next job is to run the benchmark itself:

2 Run Benchmark

This runs the benchmark against each binary we have selected above.

    import subprocess

    runs = []

    # 'files' (the table of binaries) and 'tests' are passed in by org-babel via :var
    for qemu, logname in files:
        cmd = "taskset -c 0 %s ./vector-benchmark -n %s | tee %s" % (qemu, tests, logname)
        subprocess.call(cmd, shell=True)
        runs.append(logname)

    # org-babel wraps this block in a function, so the top-level return yields the list of logs
    return runs
        

So why use python as the test runner? Well, the truth is that whenever I end up munging arrays in shell script I forget the syntax and end up jumping through all sorts of hoops. It is easier just to have some simple python. I use python again later to read the data back into an org-table so I can pass it to the next step, graphing:

set title "Vector Benchmark Results (lower is better)"
set style data histograms
set style fill solid 1.0 border lt -1

set xtics rotate by 90 right
set yrange [:]
set xlabel noenhanced
set ylabel "nsecs/Kop" noenhanced
set xtics noenhanced
set ytics noenhanced
set boxwidth 1
set xtics format ""
set xtics scale 0
set grid ytics
set term pngcairo size 1200,500

plot for [i=2:5] data using i:xtic(1) title columnhead

This is a GNU Plot script which takes the data and plots an image from it. org-mode takes care of the details of marshalling the table data into GNU Plot so all this script is really concerned with is setting styles and titles. The language is capable of some fairly advanced stuff but I could always pre-process the data with something else if I needed to.

Finally I need to upload my graph to an image hosting service to share with my colleagues. This can be done with an elaborate curl command but I have another trick at my disposal thanks to the excellent restclient-mode. This mode is actually designed for interactive debugging of REST APIs but it is also easy to use from an org-mode source block. So the whole thing looks like an HTTP session:

:client_id = feedbeef

# Upload images to imgur
POST https://api.imgur.com/3/image
Authorization: Client-ID :client_id
Content-type: image/png

< benchmark.png

Finally, because the above dumps all the headers when run (which is very handy for debugging) but I actually only want the URL in most cases, I extract it simply enough in elisp:

#+name: post-to-imgur
#+begin_src emacs-lisp :var json-string=upload-to-imgur()
  (when (string-match
         (rx "link" (one-or-more (any "\":" whitespace))
             (group (one-or-more (not (any "\"")))))
         json-string)
    (match-string 1 json-string))
#+end_src

The :var line calls the restclient-mode function automatically and passes it the result which it can then extract the final URL from.

And there you have it: my entire benchmarking workflow documented in a single file which I can read through, tweaking each step as I go. This isn’t the first time I’ve done this sort of thing. As I use org-mode extensively as a logbook to keep track of my upstream work I’ve slowly grown a series of scripts for common tasks. For example every patch series and pull request I post is done via org. I keep the whole thing in a git repository so each time I finish a sequence I can commit the results into the repository as a permanent record of what steps I ran.

If you want even more inspiration I suggest you look at John Kitchin’s scimax work. As a publishing scientist he makes extensive use of org-mode when writing his papers. He is able to include the main prose with the code to plot the graphs and tables in a single source document from which his camera-ready documents are generated. Should he ever need to reproduce any work his exact steps are all there in the source document. Yet another example of why org-mode is awesome 😉

by Alex at February 02, 2018 20:34

February 19, 2018

Marcin Juszkiewicz

Hotplug in VM. Easy to say…

You run a VM instance. Never mind whether it is part of an OpenStack setup or just a local one started using Boxes, virt-manager, virsh or some other frontend to the libvirt daemon. And then you want to add some virtual hardware to it. And another card and one more controller…

An easy scenario to imagine, right? What can go wrong, you say? A “No more available PCI slots.” message can happen. On the second/third card/controller… But how? Why?

As I wrote in one of my previous posts, most VM instances are virtual boxes of 90s PC hardware, with a simple PCI bus which accepts several cards being added/removed at any moment.

But not on the AArch64 architecture, nor on x86-64 with the Q35 machine type. What is the difference? Both are PCI Express machines, and by default they have far too small a number of PCIe slots (called pcie-root-port in qemu/libvirt language). More about PCI Express support can be found on the PCI topology and hotplug page of the libvirt documentation.

So I wrote a patch for Nova to make sure that enough slots will be available, and then started testing. I tried a few different approaches, discussed ways of solving the problem with upstream libvirt developers, and finally we selected the one and only proper way of doing it. Then I discussed failures with UEFI developers, went to the Qemu authors for help, and explained what I wanted to achieve and why to everyone in each of those four projects. At some point I was seeing pcie-root-port things everywhere…

It turned out that the method of fixing it is kind of simple: we have to create the whole PCIe structure, with the root port and slots, ourselves. This tells libvirt not to try any automatic adding of slots (which may be tricky if not configured properly, as you may end up with too few slots for basic add-ons).

Then I went with the idea of using insane values. A VM with one hundred PCIe slots? Sure. So I made one, booted it, and then something weird happened: I landed in the UEFI shell instead of a booted system. Why? How? Where is my storage? Network? Etc.?

It turns out that Qemu has limits. And libvirt has limits… All the ports/slots went onto one bus and the memory for MMCONFIG and/or I/O space was gone. There are two interesting threads about it on the qemu-devel mailing list.

So I added a magic number into my patch: 28. That many pcie-root-port entries in my aarch64 VM instance gave me a bootable system. I still have to check it on an x86-64/q35 setup, but it should be more or less the same. I expect this patch to land in ‘Rocky’ (the next OpenStack release) and will probably have to find a way to get it into ‘Queens’ as well, because that is what we are planning to use for the next edition of the Linaro Developer Cloud.
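To make that concrete, here is a minimal sketch (an illustration only, not the actual Nova patch) of the kind of libvirt controller XML the fix boils down to: the pcie-root bus plus a fixed batch of pcie-root-port slots:

    # Emit the pcie-root controller plus N pcie-root-port slots for a libvirt domain.
    # 28 is the value mentioned above that still gave a bootable aarch64 guest.
    NUM_PCIE_ROOT_PORTS = 28

    def pcie_controllers(count=NUM_PCIE_ROOT_PORTS):
        lines = ['<controller type="pci" index="0" model="pcie-root"/>']
        for i in range(1, count + 1):
            lines.append('<controller type="pci" index="%d" model="pcie-root-port"/>' % i)
        return "\n".join(lines)

    print(pcie_controllers(4))  # print a short example instead of all 28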

Conclusion? Hotplug may be complicated. But issues with it can be solved.

by Marcin Juszkiewicz at February 02, 2018 18:06

February 15, 2018

Marcin Juszkiewicz

One-hit wonders and their other hits

There are so many musical bands and singers that not every one of them can get popular. Sometimes they are popular in their country/region but not necessarily worldwide. Or they get one good song and nothing else reaches the same popularity. So-called ‘one-hit wonders’.

One of my friends recently shared a “one-hit wonders” playlist. But as it is with all those lists created during parties, it contained several false entries which rather showed that someone did not know the other hits of some bands. Anyway, it was interesting enough to play in the background.

Music was playing and letters were scrolling in the terminal, so I took a bit of time and created something fancier: a playlist of the less known hits of ‘one-hit wonders’.

Sure, there are many missing entries, and some of the listed artists/bands were more popular here and there. I am open to suggestions ;D

by Marcin Juszkiewicz at February 02, 2018 08:09

February 13, 2018

Riku Voipio

Making sense of /proc/cpuinfo on ARM

Ever stared at output of /proc/cpuinfo and wondered what the CPU is?

...
processor : 7
BogoMIPS : 2.40
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 3
Or maybe like:

$ cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 2 (v7l)
BogoMIPS : 50.00
Features : half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
CPU implementer : 0x56
CPU architecture: 7
CPU variant : 0x2
CPU part : 0x584
CPU revision : 2
...
The "CPU implementer" and "CPU part" bits could be mapped to human-understandable strings, but the kernel developers are heavily against the idea. Therefore, on to the next idea: parse them in userspace. It turns out there is a common tool that almost everyone has installed which does similar stuff: lscpu(1) from util-linux. So I proposed a patch to util-linux to do the ID mapping on arm/arm64, and it was accepted! Using lscpu from util-linux 2.32 (hopefully to be released soon), the above two systems look like:

Architecture: aarch64
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: ARM
Model: 3
Model name: Cortex-A53
Stepping: r0p3
CPU max MHz: 1200.0000
CPU min MHz: 208.0000
BogoMIPS: 2.40
L1d cache: unknown size
L1i cache: unknown size
L2 cache: unknown size
NUMA node0 CPU(s): 0-7
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
And

$ lscpu
Architecture: armv7l
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Vendor ID: Marvell
Model: 2
Model name: PJ4B-MP
Stepping: 0x2
CPU max MHz: 1333.0000
CPU min MHz: 666.5000
BogoMIPS: 50.00
Flags: half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
As we can see, lscpu is quite versatile and can show more information than just what is available in cpuinfo.
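For illustration, here is a minimal sketch (not the util-linux code itself) of the kind of implementer/part lookup lscpu now performs, using only the IDs from the two systems above:

    # Map (implementer, part) IDs from /proc/cpuinfo to readable names.
    # Only the two CPUs shown above are listed; the real table in util-linux is much longer.
    IMPLEMENTERS = {0x41: "ARM", 0x56: "Marvell"}
    PARTS = {
        (0x41, 0xD03): "Cortex-A53",
        (0x56, 0x584): "PJ4B-MP",
    }

    def cpu_name(implementer, part):
        vendor = IMPLEMENTERS.get(implementer, hex(implementer))
        model = PARTS.get((implementer, part), hex(part))
        return "%s %s" % (vendor, model)

    print(cpu_name(0x41, 0xD03))  # -> ARM Cortex-A53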

by Riku Voipio (noreply@blogger.com) at February 02, 2018 14:33

February 11, 2018

Siddhesh Poyarekar

Optimizing toolchains for modern microprocessors

About 2.5 years ago I left Red Hat to join Linaro in a move that surprised even me for the first few months. I still work on the GNU toolchain with a glibc focus, but my focus changed considerably. I am no longer looking at the toolchain in its entirety (although I do that on my own time whenever I can, either as glibc release manager or reviewer); my focus is making glibc routines faster for one specific server microprocessor; no prizes for guessing which processor that is. I have read architecture manuals in the past to understand specific behaviours but this is the first time that I have had to pore through the entire manual and optimization guides and try to eke out the last cycle of performance from a chip.

This post is an attempt to document my learnings and make a high level guide of the various things me and my team looked at to improve performance of the toolchain. Note that my team is continuing to work on this chip (and I continue to learn new techniques, I may write about it later) so this ‘guide’ is more of a personal journey. I may add more follow ups or modify this post to reflect any changes in my understanding of this vast topic.

All of my examples use ARM64 assembly since that’s what I’ve been working on and translating the examples to something x86 would have discouraged me enough to not write this at all.

What am I optimizing for?

CPUs today are complicated beasts. Vulnerabilities like Spectre allude to how complicated CPU behaviour can get but in reality it can get a lot more complicated and there’s never really a universal solution to get the best out of them. Due to this, it is important to figure out what the end goal for the optimization is. For string functions for example, there are a number of different factors in play and there is no single set of behaviours that trumps over all others. For compilers in general, the number of such combinations is even higher. The solution often is to try and ensure that there is a balance and there are no exponentially worse behaviours.

The first line of defence for this is to ensure that the algorithm used for the routine does not exhibit exponential behaviour. I wrote about algorithmic changes I did to the multiple precision fallback implementation in glibc years ago elsewhere so I’m not going to repeat that. I will however state that the first line of attack to improve any function must be algorithmic. Thankfully barring strcmp, string routines in glibc had a fairly sound algorithmic base. strcmp fall back to a byte comparison when inputs are not mutually aligned, which is now fixed.

Large strings vs small

This is one question that gets asked very often in the context of string functions and different developers have different opinions on it, some differences even leading to flamewars in the past. One popular approach to ‘solving’ this is to quote usage of string functions in a popular benchmark and use that as a measuring stick. For a benchmark like CPU2006 or CPU2017, it means that you optimize for smaller strings because the number of calls to smaller strings is very high in those benchmarks. There are a few issues to that approach:

  • These benchmarks use glibc routines for a very small fraction of time, so you’re not going to win a lot of performance in the benchmark by improving small string performance
  • Small string operations have other factors affecting them a lot more, i.e. things like cache locality, branch predictor behaviour, prefetcher behaviour, etc. So while it might be fun to tweak behaviour exactly the way a CPU likes it, it may not end up resulting in the kind of gains you’re looking for
  • A 10K string (in theory) takes at least 10 times more cycles than a 1K string, often more. So effectively, there is 10x more incentive to look at improving performance of larger strings than smaller ones.
  • There are CPU features specifically catered for larger sequential string operations and utilizing those microarchitecture quirks will guarantee much better gains
  • There are a significant number of use cases outside of these benchmarks that use glibc far more than the SPEC benchmarks. There’s no established set of benchmarks that represent them though.

I won’t conclude with a final answer for this because there is none. This is also why I had to revisit this question for every single routine I targeted, sometimes even before I decide to target it.

Cached or not?

This is another question that comes up for string routines and the answer is actually a spectrum - a string could be cached, not cached or partially cached. What’s the safe assumption then?

There is a bit more consensus on the answer to this question. It is generally considered safe to consider that shorter string accesses are cached and then focus on code scheduling and layout for its target code. If the string is not cached, the cost of getting it into cache far outweighs the savings through scheduling and hence it is pointless looking at that case. For larger strings, assuming that they’re cached does not make sense due to their size. As a result, the focus for such situations should be on ensuring that cache utilization is optimal. That is, make sure that the code aids all of the CPU units that populate caches, either through a hardware prefetcher or through judiciously placed software prefetch instructions or by avoiding caching altogether, thus avoiding evicting other hot data. Code scheduling, alignment, etc. is still important because more often than not you’ll have a hot loop that does the loads, compares, stores, etc. and once your stream is primed, you need to ensure that the loop is not suboptimal and runs without stalls.

My branch is more important than yours

Branch predictor units in CPUs are quite complicated and the compiler does not try to model them. Instead, it tries to do the simpler and more effective thing: make sure that the more probable branch target is accessible through sequential fetching. This is another aspect of the large strings vs small question for string functions, and more often than not, smaller sizes are assumed to be more probable for hand-written assembly because it seems to be that way in practice and also the cost of a mispredict hits the smaller size more than it does the larger one.

Don’t waste any part of a pig CPU

CPUs today are complicated beasts. Yes I know I started the previous section with this exact same line; they’re complicated enough to bear repeating that. However, there is a bit of relief in the fact that the first principles of their design hasn’t changed much. The components of the CPU are all things we heard about in our CS class and the problem then reduces to understanding specific quirks of the processor core. At a very high level, there are three types of quirks you look for:

  1. Something the core does exceedingly well
  2. Something the core does very badly
  3. Something the core does very well or badly under specific conditions

Typically this is made easy by CPU vendors when they provide documentation that specifies a lot of this information. Then there are cases where you discover these behaviours through profiling. Oh yes, before I forget:

Learn how to use perf or a similar tool and read its output; it will save your life

For example, the falkor core does something interesting with respect to loads and addressing modes. Typically, a load instruction would take a specific number of cycles to fetch from L1, more if memory is not cached, but that’s not relevant here. If you issue a load instruction with a pre/post-incrementing addressing mode, the microarchitecture issues two micro-instructions: one load and another that updates the base address. So:

   ldr  x1, [x2, 16]!

effectively is:

  ldr   x1, [x2, 16]
  add   x2, x2, 16

and that increases the net cost of the load. While it saves us an instruction, this addressing mode isn’t always preferred in unrolled loops since you could avoid the base address increment at the end of every instruction and do that at the end. With falkor however, this operation is very fast and in most cases, this addressing mode is preferred for loads. The reason for this is the way its hardware prefetcher works.

Hardware Prefetcher

A hardware prefetcher is a CPU unit that speculatively loads the memory location after the location requested, in an attempt to speed things up. This forms a memory stream, and the larger the string, the more it gains from prefetching. This however also means that in the case of multiple prefetcher units in a core, one must ensure that the same prefetcher unit is hit so that the unit gets trained properly, i.e. knows what’s the next block to fetch. The way a prefetcher typically knows is if it sees a consistent stride in memory access, i.e. it sees loads of X, X+16, X+32, etc. in a sequence.

On falkor the addressing mode plays an important role in determining which hardware prefetcher unit is hit by the load and effectively, a pre/post-incrementing load ensures that the loads hit the same prefetcher. That combined with a feature called register renaming ensures that it is much quicker to just fetch into the same virtual register and pre/post-increment the base address than to second-guess the CPU and try to outsmart it. The memcpy and memmove routines use this quirk extensively; the falkor routines even have detailed comments explaining the basis of this behaviour.

Doing something so badly that it is easier to win

A colleague once said that the best targets for toolchain optimizations are CPUs that do things badly. There always is this one behaviour or set of behaviours that CPU designers decided to sacrifice to benefit other behaviours. On falkor for example, calling the MRS instruction for some registers is painfully slow whereas it is close to single cycle latency for most other processors. Simply avoiding such slow paths in itself could result in tremendous performance wins; this was evident with the memset function for falkor, which became twice as fast for medium sized strings.

Another example for this is in the compiler and not glibc, where the fact that using a ‘str’ instruction on 128-bit registers with register addressing mode is very slow on falkor. Simply avoiding that instruction altogether results in pretty good gains.

CPU Pipeline

Both gcc and llvm allow you to specify a model of the CPU pipeline, i.e.

  1. The number of each type of unit the CPU has. That is, the number of load/store units, number of integer math units, number of FP units, etc.
  2. The latency for each type of instruction
  3. The number of micro-operations each instruction splits into
  4. The number of instructions the CPU can fetch/dispatch in a single cycle

and so on. This information is then used to sequence instructions in a function that it optimizes for. This may also help the compiler choose between instructions based on how long those take. For example, it may be cheaper to just declare a literal in the code and load from it than to construct a constant using mov/movk. Similarly, it could be cheaper to use csel to select a value to load to a register than to branch to a different piece of code that loads the register or vice versa.

Optimal instruction sequencing can often result in significant gains. For example, interspersing load and store instructions with unrelated arithmetic instructions could result in both those instructions executing in parallel, thus saving time. On the contrary, sequencing multiple load instructions back to back could result in other units being underutilized and all instructions being serialized on to the load unit. The pipeline model allows the compiler to make an optimal decision in this regard.

Vector unit - to use or not to use, that is the question

The vector unit is this temptress that promises to double your execution rate, but it doesn’t come without cost. The most important cost is that of moving data between general purpose and vector registers and quite often this may end up eating into your gains. The cost of the vector instructions themselves may be high, or a CPU might have multiple integer units and just one SIMD unit, because of which code may get a better schedule when executed on the integer units as opposed to via the vector unit.

I had seen an opposite example of this in powerpc years ago when I noticed that much of the integer operations were also implemented in FP in multiple precision math. This was because the original authors were from IBM and they had noticed a significant performance gain with that on powerpc (possible power7 or earlier given the timelines) because the CPU had 4 FP units!

Final Thoughts

This is really just the tip of the iceberg when it comes to performance optimization in toolchains and utilizing CPU quirks. There are more behaviours that could be exploited (such as aliasing behaviour in branch prediction or core topology) but the cost benefit of doing that is questionable.

Despite how much fun it is to hand-write assembly for such routines, the best approach is always to write simple enough code (yes, clever tricks might actually defeat compiler optimization passes!) that the compiler can optimize for you. If there are missed optimizations, improve compiler support for it. For glibc and aarch64, there is also the case of impending multiarch explosion. Due to the presence of multiple vendors, having a perfectly tuned routine for each vendor may pose code maintenance problems and also secondary issues with performance, like code layout in a binary and instruction cache utilization. There are some random ideas floating about for that already, like making separate text sections for vendor-specific code, but that’s something we would like to avoid doing if we can.

by Siddhesh at February 02, 2018 19:37

February 06, 2018

Alex Bennée

FOSDEM 2018

I’ve just returned from a weekend in Brussels for my first ever FOSDEM, the Free and Open Source Software Developers’ European Meeting. It’s been on my list of conferences to go to for some time and thanks to getting my talk accepted, my employer financed the cost of travel and hotels. Thanks to the support of the Université libre de Bruxelles (ULB) the event itself is free and run entirely by volunteers. As you can expect from the name they also have a strong commitment to free and open source software.

The first thing that struck me about the conference is how wide ranging it was. There were talks on everything from the internals of debugging tools to developing public policy. When I first loaded up their excellent companion app (naturally via the F-Droid repository) I was somewhat overwhelmed by the choice. As it is a free conference there is no limit on the numbers who can attend which means you are not always guaranteed to be able to get into every talk. In fact during the event I walked past many long queues for the more popular talks. In the end I ended up just bookmarking all the talks I was interested in and deciding which one to go to depending on how I felt at the time. Fortunately FOSDEM have a strong archiving policy and video most of their talks so I’ll be spending the next few weeks catching up on the ones I missed.

There now follows a non-exhaustive list of the most interesting ones I was able to see live:

Dashamir’s talk on EasyGPG dealt with the opinionated decisions it makes to try and make the use of GnuPG more intuitive to those not versed in the full gory details of public key cryptography. Although I use GPG mainly for signing GIT pull requests I really should make better use of it overall. The split-key solution to backups was particularly interesting. I suspect I’ll need a little convincing before I put part of my key in the cloud but I’ll certainly check out his scripts.

Liam’s A Circuit Less Travelled was an entertaining tour of some of the technologies and ideas from early computer history that got abandoned on the wayside. These ideas were often to be re-invented in a less superior form as engineers realised the error of their ways as technology advanced. The later half of the talk turns into a bit of LISP love-fest but as an Emacs user with an ever growing config file that is fine by me 😉

Following on in the history vein was Steven Goodwin’s talk on Digital Archaeology, which was a salutary reminder of the amount of recent history that is getting lost as computing’s breakneck pace has discarded old physical formats in favour of newer, equally short-lived formats. It reminded me I should really do something about the 3 boxes of floppy disks I have under my desk. I also need to schedule a visit to the Computer History Museum with my children seeing as it is more or less on my doorstep.

There was a tongue-in-cheek preview that described the EDSAC talk as recreating “an ancient computer without any of the things that made it interesting”. This was a little unkind. Although the project re-implemented the computation parts in a tiny little FPGA, the core idea was to introduce potential students to the physicality of the early computers. After an introduction to the hoary architecture of the original EDSAC and the Wheeler Jump, Mary introduced the hardware they re-imagined for the project. The first was an optical reader developed to read in paper tapes, although this time ones printed on thermal receipt paper. This included an in-depth review of the problems of smoothing out analogue inputs to get reliable signals from their optical sensors, which mirrors the problems the rebuild is facing with the nature of the valves used in EDSAC. It is a shame they couldn’t come up with some way to involve a valve but I guess high-tension supplies and school kids don’t mix well. However they did come up with a way of re-creating the original acoustic mercury delay lines, but this time with a tube of air and some 3D printed parabolic ends.

The big geek event was the much anticipated announcement of RISC-V hardware during the RISC-V enablement talk. It seemed to be an open secret the announcement was coming but it still garnered hearty applause when it finally came. I should point out I’m indirectly employed by companies with an interest in a competing architecture but it is still good to see other stuff out there. The board is fairly open but there are still some peripheral IPs which were closed which shows just how tricky getting to fully-free hardware is going to be. As I understand the RISC-V’s licensing model the ISA is open (unlike for example an ARM Architecture License) but individual companies can still have closed implementations which they license to be manufactured which is how I assume SiFive funds development. The actual CPU implementation is still very much a black box you have to take on trust.

Finally, for those interested in what I’m currently working on, my talk is already online. The slides have been slightly cropped in the video but if you follow the link to the HTML version you can read along on your machine.

I have to say FOSDEM’s setup is pretty impressive. Although there was a volunteer in each room to deal with fire safety and replace microphones, all the recording is fully automated. There are rather fancy hand crafted wooden boxes in each room which take the feed from your laptop and mux it with the camera. I got the email from the automated system asking me to review a preview of my talk about half an hour after I gave it. It took a little longer for the final product to get encoded and online but it’s certainly the nicest system I’ve come across so far.

All in all I can heartily recommend FOSDEM for anyone with an interest in FLOSS. It’s a packed schedule and there is going to be something for everyone there. Big thanks to all the volunteers and organisers and I hope I can make it next year 😉

by Alex at February 02, 2018 09:36

January 23, 2018

Leif Lindholm

Fun and games with gnu-efi

gnu-efi is a set of scripts, libraries, header files and code examples to make it possible to write applications and drivers for the UEFI environment directly from your POSIX world. It supports i386, Ia64, X64, ARM and AArch64 targets ... but it would be dishonest to say it is beginner friendly in its current state. So let's do something about that.

Rough Edges

gnu-efi comes packaged for most Linux distributions, so you can simply run

$ sudo apt-get install gnu-efi

or

$ sudo dnf install gnu-efi gnu-efi-devel

to install it. However, there is a bunch of Makefile boilerplate that is not covered by said packaging, meaning that getting from "hey, let's check this thing out" to "hello, world" involves a fair bit of tedious makefile hacking.

... serrated?

Also, the whole packaging story here is a bit ... special. It means installing headers and libraries into /usr/lib and /usr/include solely for the inclusion into images to be executed by the UEFI firmware during Boot Services, before the operating system is running. And don't get me started on multi-arch support.

Simplification

Like most other programming languages, Make supports including other source files into the current context. The gnu-efi codebase makes use of this, but not in a way that's useful to a packaging system.

Now, at least GNU Make looks in /usr/include and /usr/local/include as well as the current working directory and any directories specified on the command line with -I. This means we can stuff most of the boilerplate into makefile fragments and include them where we need them.

Hello World

So, let's start with the (almost) most trivial application imaginable:

#include <efi/efi.h>
#include <efi/efilib.h>

EFI_STATUS
efi_main(
    EFI_HANDLE image_handle,
    EFI_SYSTEM_TABLE *systab
    )
{
    InitializeLib(image_handle, systab);

    Print(L"Hello, world!\n");

    return EFI_SUCCESS;
}

Save that as hello.c.

Reducing the boiler-plate

Now grab Make.defaults and Make.rules from the gnu-efi source directory and stick them in a subdirectory called efi/.

Then download this gnuefi.mk I prepared earlier, and include it in your Makefile:

include gnuefi.mk

ifeq ($(HAVE_EFI_OBJCOPY), y)
FORMAT := --target efi-app-$(ARCH)      # Boot time application
#FORMAT := --target efi-bsdrv-$(ARCH)   # Boot services driver
#FORMAT := --target efi-rtdrv-$(ARCH)   # Runtime driver
else
SUBSYSTEM=$(EFI_SUBSYSTEM_APPLICATION)  # Boot time application
#SUBSYSTEM=$(EFI_SUBSYSTEM_BSDRIVER)    # Boot services driver
#SUBSYSTEM=$(EFI_SUBSYSTEM_RTDRIVER)    # Runtime driver
endif

all: hello.efi

clean:
    rm -f *.o *.so *.efi *~

The hello.efi dependency for the all target invokes implicit rules (defined in Make.rules) to generate hello.efi from hello.so, which is generated by an implicit rule from hello.o, which is generated by an implicit rule from hello.c.

NOTE: there are two bits of boiler-plate that still need addressing.

First of all, in gnuefi.mk, GNUEFI_LIBDIR needs to be manually adjusted to fit the layout implemented by your distribution. Template entries for Debian and Fedora are provided.

Secondly, there is the bit of boiler-plate we cannot easily get rid of - we need to inform the toolchain about whether the desired output is an application, a boot-time driver or a runtime driver. Templates for this are included in the Makefile snippet above - but note that different options must currently be set for toolchains where objcopy supports efi- targets directly and ones where it does not.

Building and running

Once the build environment is set up, build the project as you would any regular codebase.

$ make
gcc -I/usr/include/efi -I/usr/include/efi/x86_64 -I/usr/include/protocol -mno-red-zone -fpic  -g -O2 -Wall -Wextra -Werror -fshort-wchar -fno-strict-aliasing -fno-merge-constants -ffreestanding -fno-stack-protector -fno-stack-check -DCONFIG_x86_64 -DGNU_EFI_USE_MS_ABI -maccumulate-outgoing-args --std=c11 -c hello.c -o hello.o
ld -nostdlib --warn-common --no-undefined --fatal-warnings --build-id=sha1 -shared -Bsymbolic /usr/lib/crt0-efi-x86_64.o -L /usr/lib64 -L /usr/lib /usr/lib/gcc/x86_64-linux-gnu/6/libgcc.a -T /usr/lib/elf_x86_64_efi.lds hello.o -o hello.so -lefi -lgnuefi
objcopy -j .text -j .sdata -j .data -j .dynamic -j .dynsym -j .rel \
        -j .rela -j .rel.* -j .rela.* -j .rel* -j .rela* \
        -j .reloc --target efi-app-x86_64       hello.so hello.efi
rm hello.o hello.so
$ 

Then get the resulting application (hello.efi) over to a filesystem accessible from UEFI and run it.

UEFI Interactive Shell v2.2
EDK II
UEFI v2.60 (EDK II, 0x00010000)
Mapping table
FS0: Alias(s):HD1a1:;BLK3:
     PciRoot(0x0)/Pci(0x1,0x1)/Ata(0x0)/HD(1,MBR,0xBE1AFDFA,0x3F,0xFBFC1)
BLK2: Alias(s):
     PciRoot(0x0)/Pci(0x1,0x1)/Ata(0x0)
BLK4: Alias(s):
     PciRoot(0x0)/Pci(0x1,0x1)/Ata(0x0)
BLK0: Alias(s):
     PciRoot(0x0)/Pci(0x1,0x0)/Floppy(0x0)
BLK1: Alias(s):
     PciRoot(0x0)/Pci(0x1,0x0)/Floppy(0x1)
Press ESC in 5 seconds to skip startup.nsh or any other key to continue.
Shell> fs0:
FS0:\> hello
Hello, world!
FS0:\>

Wohoo, it worked! (I hope.)

Summary

gnu-efi provides a way to easily develop drivers and applications for UEFI inside your POSIX environment, but it comes with some unnecessarily rough edges. Hopefully this post makes it easier for you to get started with developing real applications and drivers using gnu-efi quickly.

Clearly, we should be working towards getting this sort of thing included in upstream and installed with distribution packages.

by Leif Lindholm at January 01, 2018 16:07

Ard Biesheuvel

Per-task stack canaries for arm64

Due to the way the stack of a thread (or task in kernelspeak) is shared between control flow data (frame pointer, return address, caller saved registers) and temporary buffers, overflowing such buffers can completely subvert the control flow of a program, and the stack is therefore a primary target for attacks. Such attacks are referred to as Return Oriented Programming (ROP), and typically consist of a specially crafted array of forged stack frames, where each return from a function is directed at another piece of code (called a gadget) that is already present in the program. By piecing together gadgets like this, powerful attacks can be mounted, especially in a big program such as the kernel where the supply of gadgets is endless.

One way to mitigate such attacks is the use of stack canaries, which are known values that are placed inside each stack frame when entering a function, and checked again when leaving the function. This forces the attacker to craft his buffer overflow attack in a way that puts the correct stack canary value inside each stack frame. That by itself is rather trivial, but it does require the attacker to discover the value first.

GCC support

GCC implements support for stack canaries, which can be enabled using the various ‑fstack-protector[‑xxx] command line switches. When enabled, each function prologue will store the value of the global variable __stack_chk_guard inside the stack frame, and each epilogue will read the value back and compare it, and branch to the function __stack_chk_fail if the comparison fails.

This works fine for user programs, with the caveat that all threads will use the same value for the stack canary. However, each program will pick a random value at program start, and so this is not a severe limitation. Similarly, for uniprocessor (UP) kernels, where only a single task will be active at the same time, we can simply update the value of the __stack_chk_guard variable when switching from one task to the next, and so each task can have its own unique value.

However, on SMP kernels, this model breaks down. Each CPU will be running a different task, and so any combination of tasks could be active at the same time. Since each will refer to __stack_chk_guard directly, its value cannot be changed until all tasks have exited, which only occurs at a reboot. Given that servers don’t usually reboot that often, leaking the global stack canary value can seriously compromise security of a running system, as the attacker only has to discover it once.

x86: per-CPU variables

To work around this issue, Linux/x86 implements support for stack canaries using the existing Thread-local Storage (TLS) support in GCC, which replaces the reference to __stack_chk_guard with a reference to a fixed offset in the TLS block. This means each CPU has its own copy, which is set to the stack canary value of that CPU’s current task when it switches to it. When the task migrates, it just takes its stack canary value along, and so all tasks can use a unique value. Problem solved.

On arm64, we are not that lucky, unfortunately. GCC only supports the global stack canary value, although discussions are underway to decide how this is best implemented for multitask/thread environments, i.e., in a way that works for userland as well as for the kernel.

Per-CPU variables and preemption

Loading the per-CPU version of __stack_chk_guard could look something like this on arm64:

adrp    x0, __stack_chk_guard
add     x0, x0, :lo12:__stack_chk_guard
mrs     x1, tpidr_el1
ldr     x0, [x0, x1]

There are two problems with this code:

  • the arm64 Linux kernel implements support for Virtualization Host Extensions (VHE), and uses code patching to replace all references to tpidr_el1 with tpidr_el2 on VHE capable systems,
  • the access is not atomic: if this code is preempted after reading the value of tpidr_el1 but before loading the stack canary value, and is subsequently migrated to another CPU, it will load the wrong value.

In kernel code, we can deal with this easily: every emitted reference to tpidr_el1 is tagged so we can patch it at boot, and on preemptible kernels we put the code in a non-preemptible block to make it atomic. However, this is impossible to do in GCC generated code without putting elaborate knowledge of the kernel’s per-CPU variable implementation into the compiler, and doing so would severely limit our future ability to make any changes to it.

One way to mitigate this would be to reserve a general purpose register for the per-CPU offset, and ensure that it is used as the offset register in the ldr instruction. This addresses both problems: we use the same register regardless of VHE, and the single ldr instruction is atomic by definition.

However, as it turns out, we can do much better than this. We don’t need per-CPU variables if we can load the task’s stack canary value directly, and each CPU already keeps a pointer to the task_struct of the current task in system register sp_el0. So if we replace the above with

movz    x0, :abs_g0:__stack_chk_guard_offset
mrs     x1, sp_el0
ldr     x0, [x0, x1]

we dodge both issues, since all of the values involved are per-task values which do not change when migrating to another CPU. Note that the same sequence could be used in userland for TLS if you swap out sp_el0 for tpidr_el0 (and use the appropriate relocation type), so adding support for this to GCC (with a command line configurable value for the system register) would be a flexible solution to this problem.

Proof of concept implementation

I implemented support for the above, using a GCC plugin to replace the default sequence

adrp    x0, __stack_chk_guard
add     x0, x0, :lo12:__stack_chk_guard
ldr     x0, [x0]

with

mrs     x0, sp_el0
add     x0, x0, :lo12:__stack_chk_guard_offset
ldr     x0, [x0]

This limits __stack_chk_guard_offset to 4 KB, but this is not an issue in practice unless struct randomization is enabled. Another caveat is that it only works with GCC’s small code model (the one that uses adrp instructions) since the plugin works by looking for those instructions and replacing them.

Code can be found here.

by ardbiesheuvel at January 01, 2018 11:12

January 17, 2018

Alex Bennée

Edit with Emacs v1.15 released

After a bit of a hiatus there was enough of a flurry of patches to make it worth pushing out a new release. I’m in a little bit of a quandary about what to do with this package now. It’s obviously a useful extension for a good number of people but I notice the slowly growing number of issues which I’m not making much progress on. It’s hard to find time to debug and fix things when its main state is Works For Me. There is also competition from the Atomic Chrome extension (and its related emacs extension). It’s an excellent package and has the advantage of a Chrome extension that is more actively developed and uses a bi-directional web-socket to communicate with the edit server. That’s been a feature I’ve wanted to add to Edit with Emacs for a while but my re-factoring efforts are slowed down by the fact that Javascript is not a language I’m fluent in and finding a long enough period of spare time is hard with a family. I guess this is a roundabout way of saying that realistically this package is in maintenance mode and you shouldn’t expect to see any new development for the time being. I’ll of course try my best to address reproducible bugs and process pull requests in a timely manner. That said, please enjoy v1.15:

Extension

* Now builds for Firefox using WebExtension hooks
* Use chrome.notifications instead of webkitNotifications
* Use … with style instead of inline for edit button
* fake “input” event to stop active page components overwriting text area

edit-server.el

* avoid calling make-frame-on-display for TTY setups (#103/#132/#133)
* restore edit-server-default-major-mode if auto-mode lookup fails
* delete window when done editing with no new frame

Get the latest from the Chrome Webstore.

by Alex at January 01, 2018 16:47

January 06, 2018

Bin Chen

Blockchain


To understand what Blockchain is, we need to go a little bit lower level and understand what a Transaction and a Block are.
A Transaction is also called a record. It maps to a real-life event, such as Rob pays Lucy $100, or Bin pays $100 for Roger Waters’ tour in Sydney 2018 (yes, that’s true.)
By contrast, Block and Blockchain are abstract entities that are used to make sure every Transaction that happened will be recorded permanently, and that once it is recorded it is trustworthy and immutable - without a centralized authority saying so. Blockchain is, well, a chain of Blocks.
The decentralized trust is the beauty and value of blockchain, and its power and usefulness are manifested by the success of the application out of which it was invented - BitCoin.
With power being decentralized, you don’t need to hand over your power and privacy to others in exchange for a service. If there were a distributed social network platform, you would probably want to give it a try if you are concerned about your privacy with Facebook. And people who, for various reasons, don’t want to go through a bank, Paypal, or WeChat for their financial transactions - that is one of the primary drivers behind the rise of BitCoin.

Distributed Consensus

For such a system to work, the central problem is: how can peers in a distributed system agree on something?
“They can vote!” I hear you screaming. Yes… but if I can control 51% of the machines, I can control the whole system for my own benefit. That is a lot easier than controlling 51% of the people; all you need is money. If the economic incentive to do so outweighs the cost, people will do it! Mind you, BitCoin is currently worth around $100Bn.
BitCoin didn’t solve the general problem of distributed consensus, but it provides a solution that works extremely well in practice, using things called proof-of-work and confirmation. It is not only about technology, but also some genius social innovation that ensures the whole system works.

Other Problems

In the context of Bitcoin, it is all about money. Other than making sure there is a consensus on which transactions are valid, the following are also very important design goals. If you can’t get them working, the whole system won’t work.
  1. A can’t spend B’s coins.
  2. A can’t spend more than she had.
  3. A can’t double-spend her coins.

Solutions

Goal #1 is guaranteed by cryptography.
A needs to sign the transaction using her private key. As long as A doesn’t have B’s private key, she won’t be able to spend B’s money.
A big shout here: take good care of your private key! Otherwise, you will lose all your money!
Goal #2 is guaranteed by having an immutable transaction history, which makes it easy to verify whether A is overspending her coins: just look up the unique global blockchain and find out how much she has left.
The immutability of the blockchain is achieved by having newer blocks include the digest of older blocks, so modifying a block also requires modifying all the blocks that come after it. The computation power required to make such a modification is so huge (we’ll discuss this in detail under proof-of-work and mining) that it outweighs any benefit you might get, so in practice nobody will do it. Therefore, we consider the blockchain used by BitCoin safe and trustworthy.
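To make the hash-chaining idea concrete, here is a minimal Python sketch of my own (a toy illustration, not BitCoin’s actual block format): each block stores the digest of its predecessor, so editing any historical block invalidates every block after it.
import hashlib
import json

def digest(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    # each new block records the digest of the current tip of the chain
    prev = digest(chain[-1]) if chain else None
    chain.append({"prev": prev, "txs": transactions})

def verify(chain):
    # valid only if every block still points at its predecessor's digest
    return all(chain[i]["prev"] == digest(chain[i - 1]) for i in range(1, len(chain)))

chain = []
append_block(chain, ["genesis"])
append_block(chain, ["Rob pays Lucy $100"])
append_block(chain, ["Bin pays $100 for a concert ticket"])
print(verify(chain))                    # True

chain[1]["txs"] = ["Rob pays Lucy $1"]  # tamper with history...
print(verify(chain))                    # False: every later digest would need recomputing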
For Goal #3, we need a little explanation of what double-spend is, since that is one of the core problems any digital currency has to solve. When using paper money, there is no issue of double-spending: you hand over your money and you get the goods, and you can’t use that same money again. In the case of Bitcoin, due to its distributed nature, it takes a while (say 1 hour) for a transaction to make it into the final blockchain, i.e. to be confirmed. During that period, if A starts another transaction using the same coin she has just spent, which is still in a to-be-confirmed status, she is double-spending the coin. Because of the distributed nature of the system, there is no easy way to arbitrate which transaction came first, since there is no global clock shared by all transactions. This is called a Race Attack, one of the ways to double-spend. Bitcoin solves the problem with something called confirmation. A newly created transaction has zero confirmations; it gets one confirmation when it is included in a Block and chained, and another confirmation each time a new block is appended after it; the more confirmations it has, the more confidence you have that the transaction will stay in the blockchain and won’t be reversed (and thus double-spent). Currently, 6 confirmations are what most people will require.

Journey of a Transaction in BitCoin system

In the previous sections, we went through some of the problems a distributed ledger system has to solve and touched gently on how BitCoin solves them at a high level. Here we go a little lower level, walking through the life cycle of a transaction in the BitCoin system.
  1. BitCoin runs on a distributed peer-to-peer network. Every peer is equal. We will call each peer a Node.
  2. A Node will create a Transaction and propagate it to the network.
  3. That transaction will be picked up by a few other nodes and validated by them; if it is considered valid, it will be put into a Block. Note that there might be more than one Block created that contains that transaction.
  4. At a certain time, a random node gets picked and asked to propose a candidate for the next Block to be added to the global blockchain.
  5. The proposed block will be validated by the other nodes, and it can be accepted or rejected by each Node. A Node that accepts it will add it to its current longest blockchain; when rejected, it just gets ignored.
  6. If enough nodes agree that the proposed block is a valid one, it will be added to the globally unique blockchain, and all the nodes need to sync up their local blockchain with the global one.
  7. Now, the transaction created in step 2 is stored in the global blockchain forever and trusted by all as a valid transaction. No dispute.
This is basically the protocol used by BitCoin to propagate and validate transactions, arriving at a consensus on whether a transaction is valid, in a fully distributed peer-to-peer system. A toy sketch of this flow follows below.
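Here is a toy Python sketch of that flow under my own simplifying assumptions (no signatures, no proof-of-work; the Node class and its methods are made-up names for illustration, not BitCoin’s real protocol):
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Node:
    def __init__(self, genesis):
        self.chain = [genesis]                 # local copy of the blockchain

    def validate_tx(self, tx):
        return tx.get("amount", 0) > 0         # stand-in for real signature/balance checks

    def propose_block(self, txs):
        # steps 4-5: bundle validated transactions on top of the current tip
        return {"prev_hash": block_hash(self.chain[-1]), "txs": txs}

    def accept_block(self, block):
        # steps 5-6: accept only if it extends the tip of our chain, otherwise ignore
        if block["prev_hash"] == block_hash(self.chain[-1]):
            self.chain.append(block)
            return True
        return False

genesis = {"prev_hash": None, "txs": []}
nodes = [Node(genesis) for _ in range(5)]

tx = {"from": "Rob", "to": "Lucy", "amount": 100}   # step 2: a node creates a transaction
assert all(n.validate_tx(tx) for n in nodes)        # step 3: peers validate it
candidate = nodes[0].propose_block([tx])            # step 4: one node proposes a block
accepted = sum(n.accept_block(candidate) for n in nodes)
print(accepted, "of", len(nodes), "nodes accepted the block")   # steps 6-7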

Next

We’ll look in detail, in the next article, at some of the concepts we touched on lightly, such as proof-of-work and what mining really is. It will be more technically focused.
Stay tuned.

by Bin Chen (noreply@blogger.com) at January 01, 2018 05:32

December 30, 2017

Gema Gomez

Add new ball for knitting

I knit less than I crochet, and this means that I forget all the basic things from time to time. Up until now, I had never had to join a new ball of yarn to a project, because my projects were small and used just one skein.

After some research, I have found this video quite clear on how to add a new ball of yarn safely:

Instructions

  1. In the middle of a row, insert the needle as if getting ready to knit a stitch normally.
  2. Instead of using the old yarn end, create a loop with the new one, and finish the stitch with it.
  3. Loop the old end of yarn over the top of the two new ones, this prevents a hole from forming.
  4. Holding both strands of the new ball of yarn do three or four more regular stitches to secure everything.
  5. Drop the short end from the new ball and just pick up the long strand and continue as normal.

Note: be careful on the way back not to work increases on the stitches that have been knitted with two strands, work them together. If the loose ends loosen up whilst you are working, give them little tugs, then weave them in.

by Gema Gomez at December 12, 2017 00:00

December 29, 2017

Gema Gomez

Autumn Knitting and Stitching Show 2017

This year, once again I took a day off during October and headed to Alexandra Palace in London to enjoy a day off looking at knitting/sewing supplies and ideas. This year’s Autumn Knitting and Stitching Show has been as interesting as always. I started the day doing some fabrics shopping (everything was so colorful):

sewing

Then, inevitably, admired all the art that was on display at the show. This time I was quite surprised by two scenes made of yarn, a railway station and a church. Here is proof that it can be knitted and it can look gorgeous:

railway station church

Awesome day out, as always with the Knitting and Stitching Show, cannot wait to see what things are there next year!

by Gema Gomez at December 12, 2017 00:00

December 02, 2017

Bin Chen

AWS Services Vs OpenStack

AWS has numerous services and it’s easy for beginners to get lost regarding what is for what. Meanwhile, as an open source advocate, I’m always interested to know what the open source alternatives are. To be fair, without open source code, none of the existing cloud computing and big data platforms would even exist.
Hence, I came up with the following table categorizing the key AWS services, each with a one-line interpretation; in addition, it also shows the corresponding OpenStack component, if there is one. Hopefully it’s helpful when you are wandering through either the AWS services or the OpenStack ones.

As you have probably noticed, AWS offers many more services than OpenStack. That’s true.
OpenStack is more of an Infrastructure as a Service (IaaS) solution, while AWS offers solutions for all the other XaaS flavours - you name it, they have it: PaaS, CaaS, FaaS. And it is not just a “MeToo” offering; AWS is actually leading the trend in some cases, such as Lambda, an offering for FaaS (Function as a Service), or serverless, if you like. We might compare the open source solutions and AWS in those areas in the future, but this table primarily compares OpenStack and AWS.
AWS Service       For What                                    OpenStack

compute
  EC2             Compute                                     Nova
  ELB             Load Balancer
  AutoScaling     Auto Scaling
  ECS             Container, Docker based
  Lambda          Function/runtime cloud

storage
  S3              Object storage                              Swift
  EBS             Block storage                               Cinder
  EFS             Network filesystem service used by EC2
  Glacier         Data archive/backup

network
  VPC             Virtual Private Cloud
  Route 53        DNS Service & Routing                       Neutron
  CloudFront      CDN

database
  RDS             Relational Database Service                 Trove
  Aurora          Amazon’s managed RDS
  DynamoDB        NoSQL data store
  ElastiCache     in-memory cache using redis/memcached
  Redshift        Data warehouse

analytics
  Athena          Analysis by SQL
  Kinesis         Stream Analysis
  EMR             Hadoop/Spark on AWS                         Sahara

IoT
  AWS IoT         IoT Devices, MQTT broker
  Greengrass      IoT Gateway, Lambda on Gateway

AI
  Lex             Speech to text & NLP/NLU, think Alexa
  Polly           Text to speech
  Rekognition     Image Analysis
  ML              Classification and prediction

mobile
  Mobile SDK      access/use AWS services on mobile
  Device Farm     app test on devices

application services
  API Gateway     REST API to access AWS services
  SQS             Message Queue                               Zaqar
  SNS             Notification Service

security/identity
  IAM             identity & access control                   Keystone

tools
  CloudFormation  Service Orchestration                       Heat
  CloudWatch      AWS resource monitor
  CloudTrail      AWS API call log
  Advisor         AWS best practice Advisor

by Bin Chen (noreply@blogger.com) at December 12, 2017 10:52

October 14, 2017

Gema Gomez

ImagiKnit

A couple of weeks ago I was in San Francisco for work. This was not my first time in San Francisco, so I didn’t really have a very packed agenda. Since it was Sunday, I went out with a couple of colleagues; we stopped at Presidio for a picnic and ate some amazing food from the lovely food trucks there (Off the grid). Afterwards we headed to what would be a very amazing visit to a yarn shop abroad. Imagiknit:

Imagiknit shop

I had never heard of it before one of my friends at work mentioned it a couple of weeks prior to our trip. The shop was a delight: spacious, welcoming, with a nice atmosphere. A lot of different brands of yarn, and lots of ideas hanging near the different brands of yarn.

Inside the shop

It took us a while to do our shopping; there was a lot of wall space to cover and we wanted to make sure to get enough yarn to have something to remember this little corner of the world by. The shop keepers were knowledgeable and helpful: they got me some of the colors I needed that were not on display. They were kind, gave me advice on some of the patterns I was interested in, and found the books I was looking for. They did not only have yarn, they had plenty of accessories and books to choose from too.

Inside the shop

And this is what my shopping looked like when I arrived at the hotel:

Shopping

ImagiKnit has become a new must go place for me whenever I go next to San Francisco. Totally worth a couple of hours if you are ever visiting the city and are into knitting or crochet.

by Gema Gomez at October 10, 2017 23:00

October 01, 2017

Bin Chen

AWS IoT Pipelines In Action: IoT Core, Lambda, Kenisis, Analytics


Today we will show you two end-to-end pipelines using AWS IoT Core and other AWS services.
  1. devices publish their status to the cloud, and the cloud processes the events and writes abnormal status to a NoSQL database.
  2. devices publish their status to the cloud, and we do real-time stream analytics on the events using Kenisis Analytics.

Pipeline 1 : Process data using Lambda

    +-----------+      +----------+     +------------+    +------------+
    |           |      | Message  |     |            |    |            |
    | IoT Device|      | Broker   |     | Rules      |    | Kenisis    |
    |           +----> |          +---> | Engine     +--> |            |
    |           |      |          |     |            |    |            |
    +-----------+      +----------+     +------------+    +-----+------+
                                                                |
                                                                v
                                       +------------+    +------------+
                                       |  DynamoDB  |    |            |
                                       |            |<---+   Lambda   |
                                       |            |    |            |
                                       +------------+    +------------+
IoT Devices publish their status to the Message Broker, one of the components of AWS IoT Core, using MQTT. The Rules Engine (again a component of AWS IoT) is set up to channel the messages into a Kenisis stream, which is set up as the trigger for the Lambda function. The Lambda function is where the processing, or business logic if you like, happens.
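The post doesn’t walk through the Rules Engine setup itself; as a rough sketch of what that step amounts to (the rule name, role ARN and partition key below are my own placeholders, not from the post), an equivalent rule forwarding everything published under sensors/# into the Kinesis stream could be created with boto3 along these lines:
import boto3

iot = boto3.client("iot")
iot.create_topic_rule(
    ruleName="sensors_to_kinesis",          # hypothetical rule name
    topicRulePayload={
        "sql": "SELECT * FROM 'sensors/#'",  # forward every device topic
        "ruleDisabled": False,
        "actions": [{
            "kinesis": {
                "roleArn": "arn:aws:iam::123456789012:role/iot-kinesis-role",  # placeholder
                "streamName": "us-east-sensors-kinesis",
                "partitionKey": "${topic()}",  # spread records across shards by topic
            }
        }],
    },
)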

Lambda

The lambda function will take records (Kenisis stream terminology) from the Kenisis stream, i.e. its trigger event, then filter, process, store, and pass them on to other stages.
Check the code section below for details.

Pipeline 2 : Real-time analytics using Kenisis Analytics

    +-----------+      +----------+     +------------+    +------------+
    |           |      | Message  |     |            |    |            |
    | IoT Device|      | Broker   |     | Rules      |    | Kenisis    |
    |           +----> |          +---> | Engine     +--> |            |
    |           |      |          |     |            |    |            |
    +-----------+      +----------+     +------------+    +-----+------+
                                                                |
                                                                v
                                       +------------+    +------------+
                                       |   Output   |    |  Kenisis   |
                                       |            |<---+  Analytics |
                                       |            |    |            |
                                       +------------+    +------------+
The first four stages of the pipeline are the same as in pipeline 1. But here we channel the Kenisis stream into Kenisis Analytics to do real-time analysis. If you know the Hadoop/Spark ecosystem, Kenisis Analytics is the equivalent of Spark Streaming.

Kenisis Analytics

You can use an SQL-like syntax to analyse the stream data over a window period.
  • An example: filter data
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (TEMP INT, EVENTID INT);
CREATE OR REPLACE PUMP "STREAM_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
SELECT STREAM TEMP, EVENTID
FROM "SOURCE_SQL_STREAM_001"
WHERE EVENTID = 2;
  • result
You'll see new events being added to the result as time goes by. Note that the time stamp is added automatically by Kenisis Analytics.
2017-10-01 03:24:37.978        50      2
2017-10-01 03:24:57.904        50      2
2017-10-01 03:25:05.914        50      2
2017-10-01 03:25:25.978        50      2
2017-10-01 03:25:44.001        50      2
2017-10-01 03:26:02.005        50      2
2017-10-01 03:26:11.898        50      2
2017-10-01 03:26:19.947        50      2
2017-10-01 03:26:29.922        50      2
2017-10-01 03:26:39.973        50      2

code

I'm not going to show you each and every step of creating an IoT device, setting up the Rules Engine, creating the Kenisis stream and connecting those components to create a pipeline.
What I will show you are:
  1. A device simulator that can be used to drive the whole pipeline. You can easily spin up multiple devices and send messages to simulate real use cases.
  2. The complete lambda handler that parses the Kenisis records, filters the data, and writes to a DynamoDB table.
  3. A simple Kenisis Analytics SQL query used to filter out abnormal events and generate a live report.

IoT device simulator

Launch one or more simulated devices that push data to a topic. It is used to drive the whole pipeline.
Basic usage: ./pub.sh deviceId. It will start a device as ${deviceId} and publish the events declared in the file simulated_events.json in a loop. To quit, use ctrl+c.
Or, use ./start_ants.sh to launch several devices in the background that will continuously publish the events.
#deviceSimulator.py
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient
import argparse
import json
import logging
import time
import signal

AllowedActions = ['publish', 'subscribe', 'both']


def subscribe_callback(client, userdata, message):
    print("[<< Receive]: ", "topic", message.topic, message.payload)


def args_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("-e", "--endpoint", action="store", required=True,
                        dest="host", help="Your AWS IoT custom endpoint")
    parser.add_argument("-r", "--rootCA", action="store", required=True,
                        dest="rootCAPath", help="Root CA file path")
    parser.add_argument("-c", "--cert", action="store", dest="certificatePath",
                        help="Certificate file path")
    parser.add_argument("-k", "--key", action="store", dest="privateKeyPath",
                        help="Private key file path")
    parser.add_argument("-w", "--websocket", action="store_true",
                        dest="useWebsocket", default=False,
                        help="Use MQTT over WebSocket")
    parser.add_argument("-id", "--clientId", action="store", dest="clientId",
                        default="basicPubSub",
                        help="Targeted client id")
    parser.add_argument("-t", "--topic", action="store", dest="topic",
                        default="sensors", help="topic prefix")
    parser.add_argument("-d", "--deviceId", action="store", dest="deviceId",
                        required=True,
                        help="device serial number, used as last part of "
                             "topic")
    parser.add_argument("-m", "--mode", action="store", dest="mode",
                        default="publish",
                        help="Operation modes: %s" % str(AllowedActions))

    args = parser.parse_args()

    return parser, args


def config_logger():
    logger = logging.getLogger("AWSIoTPythonSDK.core")
    logger.setLevel(logging.ERROR)
    streamHandler = logging.StreamHandler()
    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    streamHandler.setFormatter(formatter)
    logger.addHandler(streamHandler)


def main():
    parser, args = args_parser()

    config_logger()
    mqtt_client = create_mqtt_client(args)
    mqtt_client.connect()

    # the full topic is <prefix>/<deviceId>, e.g. sensors/001
    topic = args.topic + "/" + args.deviceId
    print("topic:", topic)

    if args.mode == 'subscribe' or args.mode == 'both':
        mqtt_client.subscribe(topic, 1, subscribe_callback)

    if args.mode == 'subscribe':
        print("endless wait.. ctrl+c to finish")
        signal.pause()

    # wait a while to make sure the subscribe takes effect
    time.sleep(2)

    if args.mode == 'publish' or args.mode == 'both':
        publish_events(topic, args.deviceId, mqtt_client)


def publish_events(topic, deviceId, mqtt_client):
    if len(topic) == 0:
        print("topic can't be empty")
        exit(2)

    with open('simulated_events.json') as f:
        events = json.load(f)

    while True:
        for event in events:
            # override the deviceId using the pass-in value
            event['deviceId'] = deviceId
            message = json.dumps(event)
            mqtt_client.publish(topic, message, 1)
            print('[>> publish]', 'topic', topic, message)
            time.sleep(2)


def create_mqtt_client(args):
    host = args.host
    rootCAPath = args.rootCAPath
    certificatePath = args.certificatePath
    privateKeyPath = args.privateKeyPath
    useWebsocket = args.useWebsocket
    clientId = args.clientId

    mqttclient = None
    if useWebsocket:
        mqttclient = AWSIoTMQTTClient(clientId, useWebsocket=True)
        mqttclient.configureEndpoint(host, 443)
        mqttclient.configureCredentials(rootCAPath)
    else:
        mqttclient = AWSIoTMQTTClient(clientId)
        mqttclient.configureEndpoint(host, 8883)
        mqttclient.configureCredentials(rootCAPath, privateKeyPath,
                                        certificatePath)

    # AWSIoTMQTTClient connection configuration
    mqttclient.configureAutoReconnectBackoffTime(1, 32, 20)
    mqttclient.configureOfflinePublishQueueing(-1)
    mqttclient.configureDrainingFrequency(2)
    mqttclient.configureConnectDisconnectTimeout(10)
    mqttclient.configureMQTTOperationTimeout(5)

    return mqttclient


if __name__ == "__main__":
    main()
#pub.sh
endpoint=your_things_endpoint

python deviceSimulator.py \
-e ${endpoint} \
-r root-CA.crt \
-c device.cert.pem \
-k device.private.key \
--topic 'sensors' \
--deviceId $1

Lambda Handler

In the handler, we filter the records whose temperature exceeds a threshold (30 in the code below) and write them to the DynamoDB table warning_events.
import base64
import boto3
import json


def lambda_handler(event, context):
    for record in event['Records']:
        event = decode_kenisis_data(record["kinesis"]["data"])
        if filter_event(event):
            print("warning: temperature too high, write it to dynamoDB")
            event = process_event(event)
            add_event(event)

    return "ok"


def filter_event(event):
    return event['temperature'] > 30


def process_event(event):
    return event


def decode_kenisis_data(data):
    """create event from base64 encoded string"""
    try:
        return json.loads(base64.b64decode(data.encode()).decode())
    except Exception:
        print("malformed event data")
        return None


def encode_kenisis_data(event):
    """encode event to base64"""
    try:
        return base64.b64encode(json.dumps(event).encode()).decode()
    except Exception:
        return None


# dynamoDB table handling
dynamodb = boto3.resource('dynamodb')
dbclient = boto3.client('dynamodb')


def is_table_exist(name):
    # Not strong enough: a table might not be fully ready, or be in a
    # destroying state; just loosen the constraint for now
    return True if name in dbclient.list_tables()['TableNames'] else False


def get_or_create_table(name):
    if not is_table_exist(name):
        print("table", name, "doesn't exist, go and create one")
        return create_table(name)

    return dynamodb.Table(name)


def create_table(table_name):
    table = dynamodb.create_table(
        TableName=table_name,
        KeySchema=[
            {
                'AttributeName': 'deviceId',
                'KeyType': 'HASH'
            }
        ],
        AttributeDefinitions=[
            {
                'AttributeName': 'deviceId',
                'AttributeType': 'S'
            }
        ],
        ProvisionedThroughput={
            'ReadCapacityUnits': 5,
            'WriteCapacityUnits': 5
        }
    )

    # Wait until the table exists.
    table.meta.client.get_waiter('table_exists').wait(TableName=table_name)
    return table


def add_event(event):
    get_or_create_table('warning_events').put_item(Item=event)


def delete_table(name):
    dynamodb.Table(name).delete()

Tips

  • debug message broker
Use pub.sh: open the script file and add --mode 'both'. The device will then receive the message it has just published.
You can also use the Test page[1] in the IoT console to subscribe/publish to a topic. You can publish using ./pub.sh and subscribe using the web page. It is more "real" than using --mode both.
One thing worth noting: the topic namespace is implicit - /account/region/topic. It means your topics won't collide with mine. And even within my account, the topic 'sensors/001' in us-east-1 is different from the one in ap-southeast-2.
  • debug rules engine
The Rules Engine connects the message broker to other AWS services, for example Kenisis or Lambda.
There is not much to debug; it is kind of a black box AFAIK. It is suggested to use CloudWatch to debug it, but I wasn't lucky enough to get that working. A good idea before debugging the connector is to make sure the relevant source/sink are working independently. Say, in the case of connecting the message broker to the Kenisis stream, make sure we have unit tested the topic/message broker and the Kenisis stream first.
Hint: if you run out of ideas, try re-creating the rules. That is what I did and the result was very good :).
  • debug Kenisis
Use get_stream_record.sh to check if there are any records in the stream.
region=us-east-1
stream_name=us-east-sensors-kinesis

SHARD_ITERATOR=$(aws kinesis get-shard-iterator \
--shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON \
--region ${region} \
--stream-name ${stream_name} \
--query 'ShardIterator')
aws kinesis get-records --region ${region} --shard-iterator ${SHARD_ITERATOR}
If not, something is wrong with publishing to the stream.
Use aws kinesis put-record to publish a record manually and run get_stream_record.sh again to see if anything changes. If it returns records now, the stream itself is fine and the problem is upstream in the pipeline - nothing is producing the records. (See the boto3 sketch below for an equivalent check.)
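If you prefer Python over the CLI, roughly the same check can be scripted with boto3 (stream name and region taken from get_stream_record.sh above; the test payload is made up):
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
stream = "us-east-sensors-kinesis"

# publish a fake device event directly into the stream
kinesis.put_record(StreamName=stream,
                   Data=json.dumps({"deviceId": "test", "temperature": 99}).encode(),
                   PartitionKey="test")

# read the stream from the beginning of the first shard
it = kinesis.get_shard_iterator(StreamName=stream,
                                ShardId="shardId-000000000000",
                                ShardIteratorType="TRIM_HORIZON")["ShardIterator"]
print(kinesis.get_records(ShardIterator=it)["Records"])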

Summary

We showcased two pipelines/architectures, which can be used to implement two common use cases. Actually, you don't have to choose one pipeline over the other: in practice, the two pipelines are usually combined. That can easily be implemented by creating multiple rules with two output Kenisis Streams, one for Lambda processing and one for analytics.
The beauty of these architectures is:
  1. Very scalable. You can have thousands of devices connected at the same time without worrying about throughput.
  2. Highly available. Components such as DynamoDB and Lambda are built on top of AWS's highly available services.
  3. Server-less. There is not a single machine, not even a virtual one, that you have to set up and maintain. Agility improves.

by Bin Chen (noreply@blogger.com) at October 10, 2017 07:53

September 16, 2017

Bin Chen

Android Things on RPi3 - 2


This is a follow-up to our last report on Android Things status on RPi3. Compared with the last report, the biggest difference is that Android Things 5.1 is now running on Android O. We'll go through the service list, system processes, properties, features, display, camera, wifi, bluetooth, sensors, and a little bit of Android O Treble.
Flash, boot up and connect. Wait a little while before using adb connect.
  • rpi3:/ $ service list
    Found 122 services:
    0 devicemanagementservice: [com.google.android.things.internal.devicemanagement.IDeviceManagementService]
    1 gpsdriverservice: [com.google.android.things.userdriver.IGpsDriverService]
    2 contexthub: [android.hardware.location.IContextHubService]
    3 netd_listener: [android.net.metrics.INetdEventListener]
    4 connmetrics: [android.net.IIpConnectivityMetrics]
    5 bluetooth_manager: [android.bluetooth.IBluetoothManager]
    6 imms: [com.android.internal.telephony.IMms]
    7 iotsystemservice: []
    8 media_projection: [android.media.projection.IMediaProjectionManager]
    9 launcherapps: [android.content.pm.ILauncherApps]
    10 shortcut: [android.content.pm.IShortcutService]
    11 media_router: [android.media.IMediaRouterService]
    12 media_session: [android.media.session.ISessionManager]
    13 restrictions: [android.content.IRestrictionsManager]
    14 graphicsstats: [android.view.IGraphicsStats]
    15 dreams: [android.service.dreams.IDreamManager]
    16 commontime_management: []
    17 network_time_update_service: []
    18 samplingprofiler: []
    19 diskstats: []
    20 trust: [android.app.trust.ITrustManager]
    21 soundtrigger: [com.android.internal.app.ISoundTriggerService]
    22 jobscheduler: [android.app.job.IJobScheduler]
    23 hardware_properties: [android.os.IHardwarePropertiesManager]
    24 serial: [android.hardware.ISerialManager]
    25 usb: [android.hardware.usb.IUsbManager]
    26 DockObserver: []
    27 audio: [android.media.IAudioService]
    28 search: [android.app.ISearchManager]
    29 country_detector: [android.location.ICountryDetector]
    30 location: [android.location.ILocationManager]
    31 devicestoragemonitor: []
    32 notification: [android.app.INotificationManager]
    33 updatelock: [android.os.IUpdateLock]
    34 servicediscovery: [android.net.nsd.INsdManager]
    35 connectivity: [android.net.IConnectivityManager]
    36 ethernet: [android.net.IEthernetManager]
    37 rttmanager: [android.net.wifi.IRttManager]
    38 wifiscanner: [android.net.wifi.IWifiScanner]
    39 wifi: [android.net.wifi.IWifiManager]
    40 netpolicy: [android.net.INetworkPolicyManager]
    41 netstats: [android.net.INetworkStatsService]
    42 network_score: [android.net.INetworkScoreService]
    43 textservices: [com.android.internal.textservice.ITextServicesManager]
    44 network_management: [android.os.INetworkManagementService]
    45 clipboard: [android.content.IClipboard]
    46 statusbar: [com.android.internal.statusbar.IStatusBarService]
    47 device_policy: [android.app.admin.IDevicePolicyManager]
    48 deviceidle: [android.os.IDeviceIdleController]
    49 lock_settings: [com.android.internal.widget.ILockSettings]
    50 uimode: [android.app.IUiModeManager]
    51 storagestats: [android.app.usage.IStorageStatsManager]
    52 mount: [android.os.storage.IStorageManager]
    53 accessibility: [android.view.accessibility.IAccessibilityManager]
    54 input_method: [com.android.internal.view.IInputMethodManager]
    55 pinner: []
    56 vrmanager: [android.service.vr.IVrManager]
    57 input: [android.hardware.input.IInputManager]
    58 window: [android.view.IWindowManager]
    59 alarm: [android.app.IAlarmManager]
    60 consumer_ir: [android.hardware.IConsumerIrService]
    61 vibrator: [android.os.IVibratorService]
    62 settings: []
    63 content: [android.content.IContentService]
    64 account: [android.accounts.IAccountManager]
    65 media.camera.proxy: [android.hardware.ICameraServiceProxy]
    66 telephony.registry: [com.android.internal.telephony.ITelephonyRegistry]
    67 scheduling_policy: [android.os.ISchedulingPolicyService]
    68 sec_key_att_app_id_provider: [android.security.keymaster.IKeyAttestationApplicationIdProvider]
    69 webviewupdate: [android.webkit.IWebViewUpdateService]
    70 overlay: [android.content.om.IOverlayManager]
    71 usagestats: [android.app.usage.IUsageStatsManager]
    72 battery: []
    73 sensorservice: [android.gui.SensorServer]
    74 dropbox: [com.android.internal.os.IDropBoxManagerService]
    75 processinfo: [android.os.IProcessInfoService]
    76 permission: [android.os.IPermissionController]
    77 cpuinfo: []
    78 dbinfo: []
    79 gfxinfo: []
    80 meminfo: []
    81 procstats: [com.android.internal.app.procstats.IProcessStats]
    82 activity: [android.app.IActivityManager]
    83 user: [android.os.IUserManager]
    84 otadexopt: [android.content.pm.IOtaDexopt]
    85 package: [android.content.pm.IPackageManager]
    86 display: [android.hardware.display.IDisplayManager]
    87 recovery: [android.os.IRecoverySystem]
    88 power: [android.os.IPowerManager]
    89 appops: [com.android.internal.app.IAppOpsService]
    90 batterystats: [com.android.internal.app.IBatteryStats]
    91 device_identifiers: [android.os.IDeviceIdentifiersPolicyService]
    92 com.google.android.things.pio.IPeripheralManager: [com.google.android.things.pio.IPeripheralManager]
    93 media.sound_trigger_hw: [android.hardware.ISoundTriggerHwService]
    94 media.radio: [android.hardware.IRadioService]
    95 media.aaudio: [IAAudioService]
    96 media.audio_policy: [android.media.IAudioPolicyService]
    97 audiodriverservice: [com.google.android.things.userdriver.IAudioDriverService]
    98 media.extractor: [android.media.IMediaExtractorService]
    99 media.resource_manager: [android.media.IResourceManagerService]
    100 media.player: [android.media.IMediaPlayerService]
    101 media.audio_flinger: [android.media.IAudioFlinger]
    102 gpu: [android.ui.IGpuService]
    103 SurfaceFlinger: [android.ui.ISurfaceComposer]
    104 media.camera: [android.hardware.ICameraService]
    105 drm.drmManager: [drm.IDrmManagerService]
    106 media.metrics: [android.media.IMediaAnalyticsService]
    107 media.codec: [android.media.IMediaCodecService]
    108 android.brillo.UpdateEngineService: [android.brillo.IUpdateEngine]
    109 android.brillo.metrics.IMetricsCollectorService: [android.brillo.metrics.IMetricsCollectorService]
    110 media.cas: [android.media.IMediaCasService]
    111 media.drm: [android.media.IMediaDrmService]
    112 android.brillo.metrics.IMetricsd: [android.brillo.metrics.IMetricsd]
    113 inputdriverservice: [com.google.android.things.userdriver.IInputDriverService]
    114 netd: [android.net.INetd]
    115 sensordriverservice: [com.google.android.things.userdriver.ISensorDriverService]
    116 android.security.keystore: [android.security.IKeystoreService]
    117 wificond: [android.net.wifi.IWificond]
    118 android.service.gatekeeper.IGateKeeperService: [android.service.gatekeeper.IGateKeeperService]
    119 storaged: [Storaged]
    120 installd: [android.os.IInstalld]
    121 batteryproperties: [android.os.IBatteryPropertiesRegistrar]
rpi3:/ $ service list | grep things
0   devicemanagementservice: [com.google.android.things.internal.devicemanagement.IDeviceManagementService]
1 gpsdriverservice: [com.google.android.things.userdriver.IGpsDriverService]
92 com.google.android.things.pio.IPeripheralManager: [com.google.android.things.pio.IPeripheralManager]
97 audiodriverservice: [com.google.android.things.userdriver.IAudioDriverService]
113 inputdriverservice: [com.google.android.things.userdriver.IInputDriverService]
115 sensordriverservice: [com.google.android.things.userdriver.ISensorDriverService]
  • rpi3:/ $ ps -ef
    UID            PID  PPID C      TIME CMD
    root 1 0 4 init
    root 2 0 0 [kthreadd]
    root 3 2 0 [ksoftirqd/0]
    root 4 2 0 [kworker/0:0]
    root 5 2 0 [kworker/0:0H]
    root 6 2 0 [kworker/u8:0]
    root 7 2 0 [rcu_preempt]
    root 8 2 0 [rcu_sched]
    root 9 2 0 [rcu_bh]
    root 10 2 0 [migration/0]
    root 11 2 0 [migration/1]
    root 12 2 0 [ksoftirqd/1]
    root 13 2 0 [kworker/1:0]
    root 14 2 0 [kworker/1:0H]
    root 15 2 0 [migration/2]
    root 16 2 0 [ksoftirqd/2]
    root 17 2 0 [kworker/2:0]
    root 18 2 0 [kworker/2:0H]
    root 19 2 0 [migration/3]
    root 20 2 0 [ksoftirqd/3]
    root 21 2 0 [kworker/3:0]
    root 22 2 0 [kworker/3:0H]
    root 23 2 0 [kdevtmpfs]
    root 24 2 0 [netns]
    root 25 2 0 [perf]
    root 26 2 0 [khungtaskd]
    root 27 2 0 [writeback]
    root 28 2 0 [ksmd]
    root 29 2 0 [crypto]
    root 30 2 0 [bioset]
    root 31 2 0 [kblockd]
    root 32 2 0 [kworker/0:1]
    root 33 2 0 [cfg80211]
    root 34 2 0 [rpciod]
    root 35 2 1 [kswapd0]
    root 36 2 0 [vmstat]
    root 37 2 0 [fsnotify_mark]
    root 38 2 0 [nfsiod]
    root 64 2 0 [kthrotld]
    root 65 2 0 [kworker/1:1]
    root 66 2 0 [kworker/2:1]
    root 67 2 0 [bioset]
    root 68 2 0 [bioset]
    root 69 2 0 [bioset]
    root 70 2 0 [bioset]
    root 71 2 0 [bioset]
    root 72 2 0 [bioset]
    root 73 2 0 [bioset]
    root 74 2 0 [bioset]
    root 75 2 0 [bioset]
    root 76 2 0 [bioset]
    root 77 2 0 [bioset]
    root 78 2 0 [bioset]
    root 79 2 0 [bioset]
    root 80 2 0 [bioset]
    root 81 2 0 [bioset]
    root 82 2 0 [bioset]
    root 83 2 0 [bioset]
    root 84 2 0 [bioset]
    root 85 2 0 [bioset]
    root 86 2 0 [bioset]
    root 87 2 0 [bioset]
    root 88 2 0 [bioset]
    root 89 2 0 [bioset]
    root 90 2 0 [bioset]
    root 91 2 0 [VCHIQ-0]
    root 92 2 0 [VCHIQr-0]
    root 93 2 0 [VCHIQs-0]
    root 94 2 0 [iscsi_eh]
    root 96 2 0 [dwc_otg]
    root 97 2 0 [DWC Notificatio]
    root 98 2 0 [VCHIQka-0]
    root 99 2 0 [dm_bufio_cache]
    root 100 2 0 [kworker/u8:1]
    root 101 2 0 [irq/92-mmc1]
    root 102 2 0 [bioset]
    root 103 2 1 [mmcqd/0]
    root 104 2 0 [kworker/0:2]
    root 105 2 0 [binder]
    root 106 2 0 [kworker/u8:2]
    root 107 2 0 [kworker/u8:3]
    root 108 2 0 [kworker/u8:4]
    root 109 2 0 [brcmf_wq/mmc1:0]
    root 110 2 0 [brcmf_wdog/mmc1]
    root 111 2 0 [ipv6_addrconf]
    root 112 2 0 [SMIO]
    root 113 2 0 [deferwq]
    root 114 2 0 [kworker/2:2]
    root 115 2 0 [jbd2/mmcblk0p6-]
    root 116 2 0 [ext4-rsv-conver]
    root 117 2 0 [kworker/3:1]
    root 118 1 0 ueventd
    root 123 2 0 [jbd2/mmcblk0p15]
    root 124 2 0 [ext4-rsv-conver]
    root 125 2 0 [ext4-rsv-conver]
    root 126 2 0 [jbd2/mmcblk0p13]
    root 127 2 0 [ext4-rsv-conver]
    logd 128 1 0 logd
    system 129 1 0 servicemanager
    system 130 1 0 hwservicemanager
    system 131 1 0 vndservicemanager /dev/vndbinder
    root 136 2 0 [kauditd]
    root 140 1 0 android.hardware.boot@1.0-service
    system 141 1 0 android.hardware.keymaster@3.0-service
    root 142 1 0 vold --blkid_context=u:r:blkid:s0 --blkid_untrusted_context=u:r:blkid_untrusted:s0 --fsck_context=u:r:fsck:s0 --fsck_untrusted_context=u:r:fsck_untrusted:s0
    root 151 1 0 netd
    root 152 1 2 zygote
    root 154 151 0 iptables-restore --noflush -w -v
    root 155 151 0 ip6tables-restore --noflush -w -v
    system 156 1 0 android.hidl.allocator@1.0-service
    audioserver 157 1 0 android.hardware.audio@2.0-service
    bluetooth 158 1 0 android.hardware.bluetooth@1.0-service
    system 159 1 0 android.hardware.configstore@1.0-service
    system 160 1 0 android.hardware.graphics.allocator@2.0-service
    system 161 1 0 android.hardware.graphics.composer@2.1-service
    system 162 1 0 android.hardware.power@1.0-service
    system 163 1 0 android.hardware.sensors@1.0-service
    system 164 1 0 android.hardware.usb@1.0-service
    wifi 165 1 0 android.hardware.wifi@1.0-service
    root 166 1 0 healthd
    root 167 1 0 lmkd
    system 168 1 2 surfaceflinger
    shell 169 1 0 sh
    shell 170 1 0 adbd --root_seclabel=u:r:su:s0
    audioserver 171 1 1 audioserver
    cameraserver 172 1 0 cameraserver
    drm 173 1 0 drmserver
    root 174 1 0 installd
    keystore 175 1 0 keystore /data/misc/keystore
    media 177 1 0 mediadrmserver
    mediaex 178 1 0 media.extractor aextractor
    media 179 1 0 media.metrics diametrics
    media 180 1 0 mediaserver
    root 181 1 1 peripheralman
    root 182 1 0 storaged
    wifi 183 1 0 wificond
    mediacodec 184 1 0 media.codec hw/android.hardware.media.omx@1.0-service
    root 186 1 0 sh /system/bin/periodic_scheduler 3600 14400 crash_sender /system/bin/crash_sender
    system 187 1 0 gatekeeperd /data/misc/gatekeeper
    system 188 1 0 userinputdriverservice
    metrics_coll 189 1 0 metrics_collector --foreground --logtosyslog
    metricsd 190 1 0 metricsd --foreground --logtosyslog
    tombstoned 192 1 0 tombstoned
    root 193 1 0 update_engine --logtostderr --foreground
    root 216 186 0 sleep 310
    mdnsr 222 1 0 mdnsd
    root 253 2 0 [kworker/1:2]
    system 310 152 12 system_server
    root 349 2 0 [kworker/0:1H]
    root 351 2 0 [kworker/1:1H]
    u0_a28 424 152 0 com.android.inputmethod.latin
    u0_a8 431 152 0 com.android.iot.systemui
    media_rw 436 142 0 sdcard -u 1023 -g 1023 -m -w /data/media emulated
    webview_zygote 467 1 1 webview_zygote32
    system 497 152 0 com.android.settings
    root 551 2 0 [kworker/3:2]
    system 592 152 0 com.google.android.things.internal.devicemanagement
    u0_a7 623 152 2 android.process.media
    system 637 152 1 com.android.iotlauncher
    u0_a11 669 152 1 com.google.android.gms.feedback
    u0_a10 695 152 0 com.android.managedprovisioning
    u0_a9 718 152 0 com.android.onetimeinitializer
    u0_a16 736 152 0 com.android.packageinstaller
    system 748 152 0 com.android.keychain
    u0_a11 756 152 2 com.google.process.gapps
    u0_a3 783 152 0 com.android.providers.calendar
    u0_a11 789 152 9 com.google.android.gms.persistent
    u0_a11 846 152 8 com.google.android.gms
    system 851 152 0 com.google.android.things.internal.bluetooth
    root 908 2 0 [kworker/2:1H]
    u0_a11 954 152 1 com.google.android.gms.ui
    u0_a11 973 152 7 com.google.android.gms.unstable
    u0_a4 998 152 1 android.process.acore
    root 1027 2 0 [kworker/3:1H]
    u0_a5 1070 152 0 android.ext.services
  • rpi3:/ $ getprop
[camera.disable_zsl_mode]: [1]
[crash_reporter.coredump.enabled]: [1]
[dalvik.vm.appimageformat]: [lz4]
[dalvik.vm.dex2oat-Xms]: [64m]
[dalvik.vm.dex2oat-Xmx]: [512m]
[dalvik.vm.dexopt.secondary]: [true]
[dalvik.vm.heapsize]: [256m]
[dalvik.vm.image-dex2oat-Xms]: [64m]
[dalvik.vm.image-dex2oat-Xmx]: [64m]
[dalvik.vm.isa.arm.features]: [default]
[dalvik.vm.isa.arm.variant]: [generic]
[dalvik.vm.lockprof.threshold]: [500]
[dalvik.vm.stack-trace-file]: [/data/anr/traces.txt]
[dalvik.vm.usejit]: [true]
[dalvik.vm.usejitprofiles]: [true]
[debug.atrace.tags.enableflags]: [0]
[debug.force_rtl]: [0]
[debug.input.timeout_mode]: [none]
[dev.bootcomplete]: [1]
[hwservicemanager.ready]: [true]
[init.svc.adbd]: [running]
[init.svc.audio-hal-2-0]: [running]
[init.svc.audioserver]: [running]
[init.svc.bluetooth-1-0]: [running]
[init.svc.boot-hal-1-0]: [running]
[init.svc.bootanim]: [stopped]
[init.svc.cameraserver]: [running]
[init.svc.configstore-hal-1-0]: [running]
[init.svc.console]: [running]
[init.svc.crash_reporter]: [stopped]
[init.svc.crash_sender]: [running]
[init.svc.drm]: [running]
[init.svc.gatekeeperd]: [running]
[init.svc.gralloc-2-0]: [running]
[init.svc.healthd]: [running]
[init.svc.hidl_memory]: [running]
[init.svc.hostapd]: [stopped]
[init.svc.hwcomposer-2-1]: [running]
[init.svc.hwservicemanager]: [running]
[init.svc.inputdriverserv]: [running]
[init.svc.installd]: [running]
[init.svc.keymaster-3-0]: [running]
[init.svc.keystore]: [running]
[init.svc.lmkd]: [running]
[init.svc.logd]: [running]
[init.svc.logd-reinit]: [stopped]
[init.svc.mdnsd]: [running]
[init.svc.media]: [running]
[init.svc.mediacodec]: [running]
[init.svc.mediadrm]: [running]
[init.svc.mediaextractor]: [running]
[init.svc.mediametrics]: [running]
[init.svc.metricscollector]: [running]
[init.svc.metricsd]: [running]
[init.svc.netd]: [running]
[init.svc.peripheralman]: [running]
[init.svc.power-hal-1-0]: [running]
[init.svc.sensors-hal-1-0]: [running]
[init.svc.servicemanager]: [running]
[init.svc.storaged]: [running]
[init.svc.surfaceflinger]: [running]
[init.svc.tombstoned]: [running]
[init.svc.ueventd]: [running]
[init.svc.update_engine]: [running]
[init.svc.usb-hal-1-0]: [running]
[init.svc.vndservicemanager]: [running]
[init.svc.vold]: [running]
[init.svc.webview_zygote32]: [running]
[init.svc.wifi_hal_legacy]: [running]
[init.svc.wificond]: [running]
[init.svc.wpa_supplicant]: [stopped]
[init.svc.zygote]: [running]
[log.tag.Hyphenator]: [SUPPRESS]
[log.tag.WifiHAL]: [D]
[logd.logpersistd.enable]: [true]
[net.bt.name]: [Android]
[net.dns1]: [198.142.152.164]
[net.dns2]: [198.142.152.165]
[net.qtaguid_enabled]: [1]
[net.tcp.default_init_rwnd]: [60]
[persist.media.treble_omx]: [false]
[persist.sys.dalvik.vm.lib.2]: [libart.so]
[persist.sys.profiler_ms]: [0]
[persist.sys.timezone]: [GMT]
[persist.sys.ui.hw]: [disable]
[persist.sys.usb.config]: [adb]
[persist.sys.webview.vmsize]: [104857600]
[pm.dexopt.ab-ota]: [speed-profile]
[pm.dexopt.bg-dexopt]: [speed-profile]
[pm.dexopt.boot]: [verify]
[pm.dexopt.first-boot]: [quicken]
[pm.dexopt.install]: [quicken]
[ro.allow.mock.location]: [0]
[ro.baseband]: [unknown]
[ro.board.platform]: []
[ro.boot.hardware]: [rpi3]
[ro.boot.selinux]: [permissive]
[ro.boot.serialno]: [00000000b359057f]
[ro.boot.slot_suffix]: [_a]
[ro.bootimage.build.date]: [Fri Aug 18 02:52:42 UTC 2017]
[ro.bootimage.build.date.utc]: [1503024762]
[ro.bootimage.build.fingerprint]: [Things/iot_rpi3/rpi3:8.0.0/OIR1.170720.017/4284968:userdebug/test-keys]
[ro.bootloader]: [unknown]
[ro.bootmode]: [unknown]
[ro.boottime.adbd]: [7601300153]
[ro.boottime.audio-hal-2-0]: [7563352809]
[ro.boottime.audioserver]: [7604559893]
[ro.boottime.bluetooth-1-0]: [7565701611]
[ro.boottime.boot-hal-1-0]: [6893695518]
[ro.boottime.bootanim]: [10393633329]
[ro.boottime.cameraserver]: [7607383278]
[ro.boottime.configstore-hal-1-0]: [7568247705]
[ro.boottime.console]: [7598286143]
[ro.boottime.crash_reporter]: [7654761924]
[ro.boottime.crash_sender]: [7657323851]
[ro.boottime.drm]: [7611007445]
[ro.boottime.gatekeeperd]: [7660283382]
[ro.boottime.gralloc-2-0]: [7570879059]
[ro.boottime.healthd]: [7587036195]
[ro.boottime.hidl_memory]: [7560812861]
[ro.boottime.hwcomposer-2-1]: [7573384632]
[ro.boottime.hwservicemanager]: [6644135778]
[ro.boottime.init]: [3799]
[ro.boottime.init.cold_boot_wait]: [374]
[ro.boottime.init.mount_all.default]: [1999]
[ro.boottime.init.selinux]: [150]
[ro.boottime.inputdriverserv]: [7662961090]
[ro.boottime.installd]: [7613993122]
[ro.boottime.keymaster-3-0]: [6896106716]
[ro.boottime.keystore]: [7617538590]
[ro.boottime.lmkd]: [7589607393]
[ro.boottime.logd]: [6618810414]
[ro.boottime.logd-reinit]: [7497335361]
[ro.boottime.mdnsd]: [8393546767]
[ro.boottime.media]: [7638129893]
[ro.boottime.mediacodec]: [7649625518]
[ro.boottime.mediadrm]: [7628786507]
[ro.boottime.mediaextractor]: [7632200413]
[ro.boottime.mediametrics]: [7634985361]
[ro.boottime.metricscollector]: [7665596403]
[ro.boottime.metricsd]: [7668492132]
[ro.boottime.netd]: [7286089424]
[ro.boottime.peripheralman]: [7640988070]
[ro.boottime.power-hal-1-0]: [7575882080]
[ro.boottime.sensors-hal-1-0]: [7578534059]
[ro.boottime.servicemanager]: [6641712810]
[ro.boottime.storaged]: [7643825153]
[ro.boottime.surfaceflinger]: [7592422497]
[ro.boottime.tombstoned]: [7726276455]
[ro.boottime.ueventd]: [4198132029]
[ro.boottime.update_engine]: [7729457809]
[ro.boottime.usb-hal-1-0]: [7581457757]
[ro.boottime.vndservicemanager]: [6648329008]
[ro.boottime.vold]: [6899477601]
[ro.boottime.webview_zygote32]: [28021782541]
[ro.boottime.wifi_hal_legacy]: [7584342080]
[ro.boottime.wificond]: [7646544840]
[ro.boottime.zygote]: [7288532289]
[ro.build.ab_update]: [true]
[ro.build.characteristics]: [embedded]
[ro.build.date]: [Fri Aug 18 02:52:42 UTC 2017]
[ro.build.date.utc]: [1503024762]
[ro.build.description]: [iot_rpi3-userdebug 8.0.0 OIR1.170720.017 4284968 test-keys]
[ro.build.display.id]: [iot_rpi3-userdebug 8.0.0 OIR1.170720.017 4284968 test-keys]
[ro.build.fingerprint]: [Things/iot_rpi3/rpi3:8.0.0/OIR1.170720.017/4284968:userdebug/test-keys]
[ro.build.flavor]: [iot_rpi3-userdebug]
[ro.build.host]: [vpef5.mtv.corp.google.com]
[ro.build.id]: [OIR1.170720.017]
[ro.build.product]: [rpi3]
[ro.build.system_root_image]: [true]
[ro.build.tags]: [test-keys]
[ro.build.type]: [userdebug]
[ro.build.user]: [android-build]
[ro.build.version.all_codenames]: [REL]
[ro.build.version.base_os]: []
[ro.build.version.codename]: [REL]
[ro.build.version.incremental]: [4284968]
[ro.build.version.preview_sdk]: [0]
[ro.build.version.release]: [8.0.0]
[ro.build.version.sdk]: [26]
[ro.build.version.security_patch]: [2017-08-05]
[ro.carrier]: [unknown]
[ro.config.alarm_alert]: [Alarm_Classic.ogg]
[ro.config.notification_sound]: [OnTheHunt.ogg]
[ro.crypto.state]: [unsupported]
[ro.dalvik.vm.native.bridge]: [0]
[ro.debuggable]: [1]
[ro.hardware]: [rpi3]
[ro.hardware.audio.primary]: [iot]
[ro.hardware.camera]: [v4l2]
[ro.hardware.gps]: [iot]
[ro.hardware.gralloc]: [gbm]
[ro.hardware.hwcomposer]: [drm]
[ro.hardware.sensors]: [iot]
[ro.metricsd.product_id]: [android-things:r2nfla]
[ro.metricsd.product_version]: [0]
[ro.persistent_properties.ready]: [true]
[ro.product.board]: [rpi3]
[ro.product.brand]: [Things]
[ro.product.cpu.abi]: [armeabi-v7a]
[ro.product.cpu.abi2]: [armeabi]
[ro.product.cpu.abilist]: [armeabi-v7a,armeabi]
[ro.product.cpu.abilist32]: [armeabi-v7a,armeabi]
[ro.product.cpu.abilist64]: []
[ro.product.device]: [rpi3]
[ro.product.manufacturer]: [Google]
[ro.product.model]: [iot_rpi3]
[ro.product.name]: [iot_rpi3]
[ro.property_service.version]: [2]
[ro.radio.noril]: [yes]
[ro.revision]: [0]
[ro.rfkilldisabled]: [1]
[ro.runtime.firstboot]: [1230768023099]
[ro.secure]: [1]
[ro.serialno]: [00000000b359057f]
[ro.sf.lcd_density]: [240]
[ro.treble.enabled]: [false]
[ro.wifi.channels]: []
[ro.zygote]: [zygote32]
[security.perf_harden]: [1]
[service.bootanim.exit]: [1]
[service.sf.present_timestamp]: [1]
[sys.boot_completed]: [1]
[sys.logbootcomplete]: [1]
[sys.rescue_boot_count]: [1]
[sys.sysctl.extra_free_kbytes]: [24300]
[sys.sysctl.tcp_def_init_rwnd]: [60]
[sys.usb.config]: [adb]
[sys.usb.configfs]: [0]
[sys.usb.state]: [adb]
[sys.wifitracing.started]: [1]
[vold.has_adoptable]: [0]
[wifi.interface]: [wlan0]
[wifi.supplicant_scan_interval]: [15]
  • HDMI Display
Initially, the monitor said "no cable connected", but after switching the source back and forth on my monitor, right when I was about to give up, the Android Things home screen showed up.
1920x1080 is used, which indicates that the resolution negotiation between the monitor and the board is working properly.
rpi3:/ $ dumpsys SurfaceFlinger
Build configuration: [sf HAS_CONTEXT_PRIORITY=0 DISABLE_TRIPLE_BUFFERING PRESENT_TIME_OFFSET=0 FORCE_HWC_FOR_RBG_TO_YUV=0 MAX_VIRT_DISPLAY_DIM=0 RUNNING_WITHOUT_SYNC_FRAMEWORK=0 NUM_FRAMEBUFFER_SURFACE_BUFFERS=2] [libui] [libgui]

Wide-Color information:
hasWideColorDisplay: 0
Display 0 color modes:
HAL_COLOR_MODE_NATIVE (0)
Current color mode: HAL_COLOR_MODE_NATIVE (0)

Sync configuration: [using: EGL_KHR_fence_sync EGL_KHR_wait_sync]
DispSync configuration: app phase 1000000 ns, sf phase 1000000 ns, present offset 0 ns (refresh 16666667 ns)

Buffering stats:
[Layer name]

Visible layers (count = 1)
+ Layer 0xb2dcd000 (com.android.iotlauncher/com.android.iotlauncher.IoTLauncher#0)
Region transparentRegion (this=0xb2dcd278, count=1)
[ 0, 0, 0, 0]
Region visibleRegion (this=0xb2dcd008, count=1)
[ 0, 0, 1920, 1080]
Region surfaceDamageRegion (this=0xb2dcd044, count=1)
[ 0, 0, -1, -1]
layerStack= 0, z= 21000, pos=(0,0), size=(1920,1080), crop=( 0, 0,1920,1080), finalCrop=( 0, 0, -1, -1), isOpaque=1, invalidate=0, alpha=1.000, flags=0x00000002, tr=[1.00, 0.00][0.00, 1.00]
client=0xb4b1c140
format= 2, activeBuffer=[1920x1080:1920, 2], queued-frames=0, mRefreshPending=0
mTexName=5 mCurrentTexture=0
mCurrentCrop=[0,0,0,0] mCurrentTransform=0
mAbandoned=0
-BufferQueue mMaxAcquiredBufferCount=1, mMaxDequeuedBufferCount=2, mDequeueBufferCannotBlock=0 mAsyncMode=0, default-size=[1920x1080], default-format=2, transform-hint=00, FIFO(0)={}
>[00:0xb4b0ea00] state=ACQUIRED, 0xb4b0ebe0 [1920x1080:1920, 2]
[01:0xb4b0ec80] state=FREE , 0xb4b0ee60 [1920x1080:1920, 2]
[02:0x0] state=FREE
Displays (1 entries)
+ DisplayDevice: Built-in Screen
type=0, hwcId=0, layerStack=0, (1920x1080), ANativeWindow=0xb517c008, orient= 0 (type=00000000), flips=70, isSecure=1, powerMode=2, activeConfig=0, numLayers=1
v:[0,0,1920,1080], f:[0,0,1920,1080], s:[0,0,1920,1080],transform:[[1.000,0.000,-0.000][0.000,1.000,-0.000][0.000,0.000,1.000]]
mAbandoned=0
-BufferQueue mMaxAcquiredBufferCount=1, mMaxDequeuedBufferCount=1, mDequeueBufferCannotBlock=0 mAsyncMode=0, default-size=[1920x1080], default-format=1, transform-hint=00, FIFO(0)={}
[00:0xb514cdc0] state=DEQUEUED, 0xb514e620 [1920x1080:1920, 1]
[01:0x0] state=FREE
SurfaceFlinger global state:
EGL implementation : 1.4 (DRI2)
EGL_ANDROID_framebuffer_target EGL_ANDROID_image_native_buffer EGL_ANDROID_recordable EGL_EXT_image_dma_buf_import EGL_KHR_cl_event2 EGL_KHR_config_attribs EGL_KHR_create_context EGL_KHR_fence_sync EGL_KHR_get_all_proc_addresses EGL_KHR_gl_colorspace EGL_KHR_gl_renderbuffer_image EGL_KHR_gl_texture_2D_image EGL_KHR_gl_texture_3D_image EGL_KHR_gl_texture_cubemap_image EGL_KHR_image_base EGL_KHR_no_config_context EGL_KHR_reusable_sync EGL_KHR_surfaceless_context EGL_KHR_wait_sync EGL_MESA_configless_context EGL_MESA_drm_image EGL_MESA_image_dma_buf_export
GLES: Broadcom, Gallium 0.4 on VC4 V3D 2.1, OpenGL ES 2.0 Mesa 17.0.4
GL_EXT_debug_marker GL_EXT_blend_minmax GL_EXT_multi_draw_arrays GL_EXT_texture_format_BGRA8888 GL_OES_compressed_ETC1_RGB8_texture GL_OES_depth24 GL_OES_element_index_uint GL_OES_fbo_render_mipmap GL_OES_mapbuffer GL_OES_rgb8_rgba8 GL_OES_stencil8 GL_OES_texture_3D GL_OES_texture_npot GL_OES_vertex_half_float GL_OES_EGL_image GL_OES_depth_texture GL_OES_packed_depth_stencil GL_EXT_texture_type_2_10_10_10_REV GL_OES_get_program_binary GL_APPLE_texture_max_level GL_EXT_discard_framebuffer GL_EXT_read_format_bgra GL_NV_fbo_color_attachments GL_OES_EGL_image_external GL_OES_EGL_sync GL_OES_vertex_array_object GL_EXT_unpack_subimage GL_NV_draw_buffers GL_NV_read_buffer GL_NV_read_depth GL_NV_read_depth_stencil GL_NV_read_stencil GL_EXT_draw_buffers GL_EXT_map_buffer_range GL_KHR_debug GL_OES_surfaceless_context GL_EXT_separate_shader_objects GL_EXT_draw_elements_base_vertex GL_EXT_texture_border_clamp GL_KHR_context_flush_control GL_OES_draw_elements_base_vertex GL_OES_texture_border_clamp
Wide-color: Off
Region undefinedRegion (this=0xb51477ac, count=1)
[ 0, 0, 0, 0]
orientation=0, isDisplayOn=1
last eglSwapBuffers() time: 494.270000 us
last transaction time : 64.688000 us
transaction-flags : 00000000
refresh-rate : 59.999999 fps
x-dpi : 92.014999
y-dpi : 91.440002
gpu_to_cpu_unsupported : 0
eglSwapBuffers time: 0.000000 us
transaction time: 0.000000 us
VSYNC state: disabled
soft-vsync: disabled
numListeners=12,
events-delivered: 75
0xb2e66c50: count=-1
0xb2e66da0: count=-1
0xb2e66ec0: count=-1
0xb4b19060: count=-1
0xb4b190c0: count=-1
0xb4b190f0: count=-1
0xb4b19120: count=-1
0xb4b19150: count=-1
0xb4b191b0: count=-1
0xb511f4b0: count=-1
0xb511f4e0: count=-1
0xb511f510: count=-1

Display 0 HWC layers:
-------------------------------------------------------------------------------
Layer name
Z | Comp Type | Disp Frame (LTRB) | Source Crop (LTRB)
-------------------------------------------------------------------------------
com.android.iotlauncher/com.android.iotlauncher.IoTLauncher#0
21000 | Device | 0 0 1920 1080 | 0.0 0.0 1920.0 1080.0
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

h/w composer state:
h/w composer enabled
Allocated buffers:
0xb2e75e20: 8100.00 KiB | 1920 (1920) x 1080 | 1 | 1 | 0x1a00 |
0xb3016f00: 8100.00 KiB | 1920 (1920) x 1080 | 1 | 1 | 0x1a00 |
0xb4b0ebe0: 8100.00 KiB | 1920 (1920) x 1080 | 1 | 2 | 0x933 | com.android.iotlauncher/com.android.iotlauncher.IoTLauncher#0
0xb4b0ee60: 8100.00 KiB | 1920 (1920) x 1080 | 1 | 2 | 0x933 | com.android.iotlauncher/com.android.iotlauncher.IoTLauncher#0
0xb514e620: 8100.00 KiB | 1920 (1920) x 1080 | 1 | 1 | 0x1a00 | FramebufferSurface
0xb514f3e0: 8100.00 KiB | 1920 (1920) x 1080 | 1 | 1 | 0x1a00 |
Total allocated (estimate): 48600.00 KB
  • partitions
Still the same; no change.
rpi3:/ $ ls /dev/block/platform/soc/3f202000.sdhost/by-name/ -l
total 0
lrwxrwxrwx 1 root root 20 1970-01-01 00:00 boot_a -> /dev/block/mmcblk0p4
lrwxrwxrwx 1 root root 20 1970-01-01 00:00 boot_b -> /dev/block/mmcblk0p5
lrwxrwxrwx 1 root root 21 1970-01-01 00:00 gapps_a -> /dev/block/mmcblk0p13
lrwxrwxrwx 1 root root 21 1970-01-01 00:00 gapps_b -> /dev/block/mmcblk0p14
lrwxrwxrwx 1 root root 21 1970-01-01 00:00 misc -> /dev/block/mmcblk0p10
lrwxrwxrwx 1 root root 21 1970-01-01 00:00 oem_a -> /dev/block/mmcblk0p11
lrwxrwxrwx 1 root root 21 1970-01-01 00:00 oem_b -> /dev/block/mmcblk0p12
lrwxrwxrwx 1 root root 20 1970-01-01 00:00 rpiboot -> /dev/block/mmcblk0p1
lrwxrwxrwx 1 root root 20 1970-01-01 00:00 system_a -> /dev/block/mmcblk0p6
lrwxrwxrwx 1 root root 20 1970-01-01 00:00 system_b -> /dev/block/mmcblk0p7
lrwxrwxrwx 1 root root 20 1970-01-01 00:00 uboot_a -> /dev/block/mmcblk0p2
lrwxrwxrwx 1 root root 20 1970-01-01 00:00 uboot_b -> /dev/block/mmcblk0p3
lrwxrwxrwx 1 root root 21 1970-01-01 00:00 userdata -> /dev/block/mmcblk0p15
lrwxrwxrwx 1 root root 20 1970-01-01 00:00 vbmeta_a -> /dev/block/mmcblk0p8
lrwxrwxrwx 1 root root 20 1970-01-01 00:00 vbmeta_b -> /dev/block/mmcblk0p9
  • init.rpi3.rc
on fs
mount_all /fstab.${ro.hardware}

on early-init
mount debugfs debugfs /sys/kernel/debug mode=0755

service dhcpcd_wlan0 /system/bin/dhcpcd -dABKL
group dhcp
disabled
oneshot

service wpa_supplicant /vendor/bin/hw/wpa_supplicant \
-iwlan0 -Dnl80211 -c/data/misc/wifi/wpa_supplicant.conf \
-I/system/etc/wifi/wpa_supplicant_overlay.conf \
-O/data/misc/wifi/sockets \
-e/data/misc/wifi/entropy.bin -g@android:wpa_wlan0
class main
socket wpa_wlan0 dgram 660 wifi wifi
disabled
oneshot

on boot
chmod 0666 /dev/sw_sync

# Add a cpuset for the camera daemon
# We want all cores for camera
mkdir /dev/cpuset/camera-daemon
write /dev/cpuset/camera-daemon/cpus 0-3
write /dev/cpuset/camera-daemon/mems 0
chown system system /dev/cpuset/camera-daemon
chown system system /dev/cpuset/camera-daemon/tasks
chmod 0644 /dev/cpuset/camera-daemon/tasks
  • rpi3:/ $ cat ueventd.rpi3.rc
    /dev/video*               0660   system     camera
    /dev/ttyAMA0 0660 bluetooth bluetooth
    /dev/rfkill 0660 bluetooth bluetooth
    /sys/class/rfkill/rfkill0/state 0660 bluetooth bluetooth
  • bluetooth
Bluetooth is disabled by default, but we can enable it from the CLI.
rpi3:/ # dumpsys bluetooth_manager       
Bluetooth Status
enabled: false
state: OFF
address: null
name: RPI3
Bluetooth never enabled!
Bluetooth crashed 0 times

No BLE Apps registered.

Bluetooth Service not connected
rpi3:/ # service call bluetooth_manager 6  
Result: Parcel(00000000 00000001 '........')

rpi3:/ # dumpsys bluetooth_manager
Bluetooth Status
enabled: true
state: ON
address: 22:22:ED:D0:75:9F
name: RPI3
time since enabled: 00:00:02.555

Enable log:
08-30 04:46:18 Enabled by null
Bluetooth crashed 0 times

No BLE Apps registered.

Bonded devices:

Profile: BtGatt.GattService
mAdvertisingServiceUuids:
mMaxScanFilters: 0

GATT Scanner Map
Entries: 0

GATT Client Map
Entries: 0

GATT Server Map
Entries: 0

GATT Handle Map
Entries: 0
Requests: 0

Profile: HeadsetService
mCurrentDevice: null
mTargetDevice: null
mIncomingDevice: null
mActiveScoDevice: null
mMultiDisconnectDevice: null
mVirtualCallStarted: false
mVoiceRecognitionStarted: false
mWaitingForVoiceRecognition: false
StateMachine: HeadsetStateMachine:
total records=2
rec[0]: time=08-30 04:46:20.124 processed=Disconnected org=Disconnected dest=<null> what=10(0xa)
rec[1]: time=08-30 04:46:20.124 processed=<null> org=Disconnected dest=<null> what=11(0xb)
curState=Disconnected

mPhoneState: com.android.bluetooth.hfp.HeadsetPhoneState@ce70fe4
mAudioState: 10

Profile: A2dpSinkService
mCurrentDevice: null
mTargetDevice: null
mIncomingDevice: null
StateMachine: A2dpSinkStateMachine:
total records=0
curState=Disconnected


Profile: HidService
mTargetDevice: null
mInputDevices:

Profile: HealthService
mHealthChannels:
mApps:
mHealthDevices:

Profile: PanService
mMaxPanDevices: 5
mPanIfName: bt-pan
mTetherOn: false
mPanDevices:

Profile: BluetoothMapService
mRemoteDevice: null
sRemoteDeviceName: null
mState: 0
mAppObserver: com.android.bluetooth.map.BluetoothMapAppObserver@3916c4d
mIsWaitingAuthorization: false
mRemoveTimeoutMsg: false
mPermission: 0
mAccountChanged: false
mBluetoothMnsObexClient: null
mMasInstanceMap:
null : MasId: 0 Uri:null SMS/MMS:true
mEnabledAccounts:

Profile: AvrcpControllerService
StateMachine: AvrcpControllerSM:
total records=0
curState=Disconnected


Profile: BluetoothPbapService

Connection Events:
None

Bond Events:
Total Number of events: 0

A2DP State:
TxQueue:
Counts (enqueue/dequeue/readbuf) : 0 / 0 / 0
Last update time ago in ms (enqueue/dequeue/readbuf) : 0 / 0 / 0
Frames per packet (total/max/ave) : 0 / 0 / 0
Counts (flushed/dropped/dropouts) : 0 / 0 / 0
Counts (max dropped) : 0
Last update time ago in ms (flushed/dropped) : 0 / 0
Counts (underflow) : 0
Bytes (underflow) : 0
Last update time ago in ms (underflow) : 0
Enqueue deviation counts (overdue/premature) : 0 / 0
Enqueue overdue scheduling time in ms (total/max/ave) : 0 / 0 / 0
Enqueue premature scheduling time in ms (total/max/ave) : 0 / 0 / 0
Dequeue deviation counts (overdue/premature) : 0 / 0
Dequeue overdue scheduling time in ms (total/max/ave) : 0 / 0 / 0
Dequeue premature scheduling time in ms (total/max/ave) : 0 / 0 / 0

A2DP Codecs State:
Current Codec: None

A2DP LDAC State:
Priority: 5001
Encoder interval (ms): 20
Config: Invalid
Selectable: Invalid
Local capability: Rate=44100|48000|88200|96000 Bits=16|24|32 Mode=STEREO
Packet counts (expected/dropped) : 0 / 0
PCM read counts (expected/actual) : 0 / 0
PCM read bytes (expected/actual) : 0 / 0
LDAC quality mode : HIGH
LDAC transmission bitrate (Kbps) : -1
LDAC saved transmit queue length : 0

A2DP AAC State:
Priority: 2001
Encoder interval (ms): 20
Config: Invalid
Selectable: Invalid
Local capability: Rate=44100 Bits=16 Mode=STEREO
Packet counts (expected/dropped) : 0 / 0
PCM read counts (expected/actual) : 0 / 0
PCM read bytes (expected/actual) : 0 / 0

A2DP SBC State:
Priority: 1001
Encoder interval (ms): 20
Config: Invalid
Selectable: Invalid
Local capability: Rate=44100 Bits=16 Mode=STEREO
Packet counts (expected/dropped) : 0 / 0
PCM read counts (expected/actual) : 0 / 0
PCM read bytes (expected/actual) : 0 / 0
Frames counts (expected/dropped) : 0 / 0

Bluetooth Config:
Config Source: New file
Devices loaded: 0
File created/tagged: 2017-08-30 04:46:18
File source: Empty

Bluetooth HF Client BTA Statistics

Bluetooth Wakelock Statistics:
Is acquired : true
Acquired/released count : 22 / 21
Acquired/released error count : 0 / 0
Last acquire/release error code: 0 / 0
Last acquired time (ms) : 1019
Acquired time min/max/avg (ms) : 1 / 1019 / 52
Total acquired time (ms) : 1160
Total run time (ms) : 1196

Bluetooth Memory Allocation Statistics:
Total allocated/free/used counts : 1172 / 661 / 511
Total allocated/free/used octets : 370075 / 79808 / 290267

Bluetooth Alarms Statistics:
Total Alarms: 2

Alarm : btif.config (SINGLE)
Action counts (sched/resched/exec/cancel) : 5 / 0 / 0 / 0
Deviation counts (overdue/premature) : 0 / 0
Time in ms (since creation/interval/remaining) : 1008 / 3000 / 1992
Callback execution time in ms (total/max/avg) : 0 / 0 / 0
Overdue scheduling time in ms (total/max/avg) : 0 / 0 / 0
Premature scheduling time in ms (total/max/avg): 0 / 0 / 0

Alarm : btm_ble_addr.refresh_raddr_timer (SINGLE)
Action counts (sched/resched/exec/cancel) : 2 / 0 / 0 / 0
Deviation counts (overdue/premature) : 0 / 0
Time in ms (since creation/interval/remaining) : 1016 / 900000 / 898984
Callback execution time in ms (total/max/avg) : 0 / 0 / 0
Overdue scheduling time in ms (total/max/avg) : 0 / 0 / 0
Premature scheduling time in ms (total/max/avg): 0 / 0 / 0

--- BEGIN:BTSNOOP_LOG_SUMMARY (5757 bytes in) ---
Al3YizXxVwUAeJzll19oHEUcx3+zs3u7l5u720tycg3xstZYfagxvQ0aQ8JpY0lSSQ1HW1CLtBYfDARriqipUB+aFhqoKIhCwHootvrgk4gWShXtH0RI9MEXRV9EbLGI
BWlKY9bfb2Z6eyOlPpZm9/cQJt/v/GbmM7/73Z4NNtATcAEuxpU0gJ+zGQ5tjNMplBwfchhFi6QMw+E//CEPGGQwznvoCIU7n4raoUXm+MVSOUKVo58czIc8RklKguHQ
/RTcPKTXkuUkrhqkfchiqGVaGA6PvrB4ca00PE8Gy4cRjDsdMjzCcBhFEf82+hP//NF1Kf/7SvXya1En2nddjqL9S/jvKtzwcTBupyPaPoMCxl1y8RyzfQDr1MrCyo/R
vkPkmqVDbMeN0wF3HGHygNsFpDDKLmpTwlLioD79lJDpj6ZVekbpt8TpmeU2bWJMuyxy9cQuy2rjsatCrhm9iXv0OjOK8nrS8gF4GEpyGA5fJS0j6QV4PS2Ql5rLcHgV
PFLfIrUzkPezRt8PDiN6cHGyPE4Wrixj2oLDLn0CgbFb3nLg7S9b8n+0xRf1FlkgLXVpEZ7MHJ0aIsvr1yzqFIO0UEbVSlEvhMMU4uK2IyGsF2jZJIifC7N6+iYF+xIW
czCiAd2ttRGlPUhVPqy1i0xpw6rqN5DWLXi73U3Dsp7ZrTZVIbWkTl/Vm8Jh5sI3OwaXzl+lEpAJSsKCgOYv6uwlVR9yft6zHLnWOS3mPZlcbqtdQISxVZC0zHC4cXi8
L+wL+x/eEIQP9PSNj84EtV179zz9zPT0y8HEs0F4b29vX/jYjcv7FnwIyRDEvDtj3q1fPPflhfZjdyzDMpxtQUubWKpNjIU3e8s346FCKmVVIbUJ+BvjMPWxmviLOWmi
wjhzmPv/mW79h1g4mkVNfWLfhbiC+uMKmsy96dePfPcVWbbCdYusXnjjlbk5Pk+WiSZLJbYcGux6cueZ4+eomz5BFidIjf46NJnbRjsJdO9wVLsd1Ya3Zz+uF2pkOCwa
Blqk3rRIT7zItP1D7/dP7anSzW7OGDfLObQm424J1wnRuFnqpVs+RxbrhLVTij9p2OvUR2AaDFAOby0kAxWx+CTXAEUsFsFg4SKLrkTQIBYLwmAxWTFYpLkoJIMGsRjI
GiwWwGCR4ZlCMmgQi06TxWdgsMhKFrkE0CAWv8WfEfrN0PEBshgQ+Ktt7h2S01zJA0qexVlBWdjgtcrZ+3TnLauX6xmSO/TL9aNa61DaAa1x+fXIGxqlff9aWl+lDbNG
2p/1VPmuv5cbad/TmpxXjTW62HkwLjbfuNjbVvnVEos1ZpF/BAaLQoNFcZXTIBaeyeIsJJhFwWTxNSSYxTHzBWEKEsxiY9H4ItjW4xmdfoAbLfk4NLXdcbPTH4Smdr3Z
bMkfHjQQt/0H8X2rFrJE7Bvldv8Bg0XxOiwqhdVIg1i8FLehfwGzhw7p
--- END:BTSNOOP_LOG_SUMMARY ---
  • supported features
rpi3:/system/etc/permissions # ls -l
total 100
-rw-r--r-- 1 root root 820 android.hardware.bluetooth.xml
-rw-r--r-- 1 root root 830 android.hardware.bluetooth_le.xml
-rw-r--r-- 1 root root 933 android.hardware.camera.external.xml
-rw-r--r-- 1 root root 834 android.hardware.ethernet.xml
-rw-r--r-- 1 root root 942 android.hardware.location.gps.xml
-rw-r--r-- 1 root root 868 android.hardware.usb.host.xml
-rw-r--r-- 1 root root 829 android.hardware.wifi.xml
-rw-r--r-- 1 root root 748 android.software.webview.xml
-rw-r--r-- 1 root root 828 com.android.location.provider.xml
-rw-r--r-- 1 root root 828 com.android.media.remotedisplay.xml
-rw-r--r-- 1 root root 820 com.android.mediadrm.signer.xml
-rw-r--r-- 1 root root 977 iot_features.xml
-rw-r--r-- 1 root root 8786 platform.xml
-rw-r--r-- 1 root root 19690 privapp-permissions-google.xml
-rw-r--r-- 1 root root 20273 privapp-permissions-platform.xml
cat iot_features.xml:
<permissions>

<feature name="android.hardware.type.embedded" />


<library name="com.google.android.things"
file="/system/framework/com.google.android.things.jar" />

</permissions>
  • camera
rpi3:/system/vendor # dumpsys media.camera                                                                                                                                
== Service global info: ==

Number of camera devices: 0
Number of normal camera devices: 0
Active Camera Clients:
[]
Allowed user IDs: 0

== Camera service events log (most recent at top): ==
01-01 00:00:03 : USER_SWITCH previous allowed user IDs: , current allowed user IDs: 0

== Camera Provider HAL legacy/0 (v2.4, passthrough) static info: 0 devices: ==

== Vendor tags: ==

Dumping vendor tag descriptors for vendor with id 3854507339
Dumping configured vendor tag descriptors: None set

== Camera error traces (0): ==
No camera traces collected.
  • sensor
There are two sensor services: one is the standard Android one, the other is Android Things specific. I'm not sure about the relationship between them.
rpi3:/system/vendor # service list | grep sensor  
73 sensorservice: [android.gui.SensorServer]
115 sensordriverservice: [com.google.android.things.userdriver.ISensorDriverService]

rpi3:/system/vendor # dumpsys sensorservice
Sensor Device:
Total 1 h/w sensors, 1 running:
0x00000001) active-count = 1; sampling_period(ms) = {1.0}, selected = 1.00 ms; batching_period(ms) = {0.0}, selected = 0.00 ms
Sensor List:
0x00000001) User-Driver Dynamic Meta-Sensor | Google | ver: 1 | type: android.sensor.dynamic_sensor_meta(32) | perm: n/a
special-trigger | maxDelay=0us | minDelay=0us | no batching | wakeUp |
Fusion States:
9-axis fusion disabled (0 clients), gyro-rate= 200.00Hz, q=< 0, 0, 0, 0 > (0), b=< 0, 0, 0 >
game fusion(no mag) disabled (0 clients), gyro-rate= 200.00Hz, q=< 0, 0, 0, 0 > (0), b=< 0, 0, 0 >
geomag fusion (no gyro) disabled (0 clients), gyro-rate= 200.00Hz, q=< 0, 0, 0, 0 > (0), b=< 0, 0, 0 >
Recent Sensor events:
Active sensors:
User-Driver Dynamic Meta-Sensor (handle=0x00000001, connections=1)
Socket Buffer size = 39 events
WakeLock Status: not held
Mode : NORMAL
1 active connections
Connection Number: 0
Operating Mode: NORMAL
com.android.server.SensorNotificationService | WakeLockRefCount 0 | uid 1000 | cache size 0 | max cache size 0
User-Driver Dynamic Meta-Sensor 0x00000001 | status: active | pending flush events 0
0 direct connections
Previous Registrations:
00:00:03 + 0x00000001 pid= 310 uid= 1000 package=com.android.server.SensorNotificationService samplingPeriod=0us batchingPeriod=0us

rpi3:/system/vendor # dumpsys sensordriverservice
# nothing

Android O HAL - Treble Architecture

Of particular interest is libandroidthings*.so. It indicates that Android Things is more of a platform extension, a much smaller change in scope than what Brillo set out to do.
total 1068
/system/vendor/lib/
|- camera.device@1.0-impl.so
|- camera.device@3.2-impl.so
|- libalsautils.so
|- libandroidthings.so
|- libandroidthings_jni.so
|- libbt-vendor.so
|- libdrm.so
|- libeffects.so
|- libhwc2on1adapter.so
|- libkeystore-engine-wifi-hidl.so
|- libkeystore-wifi-hidl.so
|- libril.so
|- libwifi-hal.so
|- libwpa_client.so
|- mediadrm
|- soundfx
|- hw
|- android.hardware.audio.effect@2.0-impl.so
|- android.hardware.audio@2.0-impl.so
|- android.hardware.bluetooth@1.0-impl.so
|- android.hardware.boot@1.0-impl.so
|- android.hardware.camera.provider@2.4-impl.so
|- android.hardware.gnss@1.0-impl.so
|- android.hardware.graphics.allocator@2.0-impl.so
|- android.hardware.graphics.composer@2.1-impl.so
|- android.hardware.graphics.mapper@2.0-impl.so
|- android.hardware.keymaster@3.0-impl.so
|- android.hardware.power@1.0-impl.so
|- android.hardware.sensors@1.0-impl.so
|- audio.primary.default.so
|- audio.r_submix.default.so
|- audio.usb.default.so
|- gralloc.default.so
|- gralloc.gbm.so
|- local_time.default.so
|- power.default.so
rpi3:/system/vendor/bin/hw # ls -l
root shell android.hardware.audio@2.0-service
bluetooth bluetooth android.hardware.bluetooth@1.0-service
root shell android.hardware.boot@1.0-service
root shell android.hardware.configstore@1.0-service
root shell android.hardware.graphics.allocator@2.0-service
root shell android.hardware.graphics.composer@2.1-service
root shell android.hardware.keymaster@3.0-service
root shell android.hardware.media.omx@1.0-service
root shell android.hardware.power@1.0-service
root shell android.hardware.sensors@1.0-service
root shell android.hardware.usb@1.0-service
wifi wifi android.hardware.wifi@1.0-service
root shell wpa_supplicant

by Bin Chen (noreply@blogger.com) at September 09, 2017 02:37

September 12, 2017

Siddhesh Poyarekar

Across the Charles Bridge - GNU Tools Cauldron 2017

Since I joined Linaro back in 2015 around this time, my travel has gone up 3x with 2 Linaro Connects a year added to the one GNU Tools Cauldron. This year I went to FOSSAsia too, so it’s been a busy traveling year. The special thing about Cauldron though is that it is one of those conferences where I ‘work’ as well as have a lot of fun. The fun bit is because I get to meet all of the people that I work with almost every day in person and a lot of them have become great friends over the years.

I still remember the first Cauldron I went to in 2013 at Mountain View where I felt dwarfed by all of the giants I was sitting with. It was exaggerated because it was the first time I met the likes of Jeff Law, Richard Henderson, etc. in personal meetings since I had joined the Red Hat toolchain team just months before; it was intimidating and exciting all at once. That was also the first time I met Roland McGrath (I still hadn’t met Carlos, he had just had a baby and couldn’t come), someone I was terrified of back then because his patch reviews would be quite sharp and incisive. I had imagined him to be a grim old man hammering out those words from a stern laptop, so it was a surprise to see him use the same kinds of words but with a sarcastic smile, completely changing the context and tone. That was the first time I truly realized how emails often lack context. Years later, I still try to visualize people when I read their emails.

Skip to 4 years later and I was at my 5th Cauldron last week and despite my assumptions on how it would go, it was a completely new experience. A lot of it had to do with my time at Linaro and very little to do with technical growth. I felt like an equal to Linaro folks all over the world and I seemed to carry that forward here, where I felt like an equal with all of the people present, I felt like I belonged. I did not feel insecure about my capabilities (I still am intimately aware of my limitations), nor did I feel the need to constantly prove that I belonged. I was out there seeking toolchain developers (we are hiring btw, email me if you’re a fit), comfortable with the idea of leading a team. The fact that I managed to not screw up the two glibc releases I managed may also have helped :)

Oh, and one wonderful surprise was that an old friend decided to drop in at Cauldron and spend a couple of days.

This year’s Cauldron had the most technical talks submitted in recent years. We had 5 talks in the glibc area, possibly also the highest for us; just as well because we went over time in almost all of them. I won’t say that it’s a surprise since that has happened in every single year that I attended. The first glibc talk was about tunables where I briefly recapped what we have done in tunables so far and talked about the future a bit more at length. Pedro Alves suggested putting pretty printers for tunables for introspection and maybe also for runtime tuning in the coming future. There was a significant amount of interest in the idea of auto-tuning, i.e. collecting profiling data about tunable use and coming up with optimal default values and possibly even eliminating such tunables in future if we find that we have a pretty good default. We also talked about tuning at runtime and the various kinds of support that would be required to make it happen. Finally there were discussions on tuning profiles and ideas around creating performance-enhanced routines for workloads instead of CPUs. The video recording of the talk will hopefully be out soon and I’ll link the video here when it is available.

Florian then talked about glibc 3.0, a notional concept (i.e. won’t be a soname bump) where we rewrite sections of code that have been rotting due to having to support some legacy platforms. The most prominent among them is libio, the module in glibc that implements stdio. When libio was written, it was designed to be compatible with libstdc++ so that FILE streams could be compatible with C++ stdio streams. The only version of gcc that really supports that is 2.95 since libstdc++ has since moved on. However because of the way we do things in glibc, we cannot get rid of them even if there is just one user that needs that ABI. We toyed with the concept of a separate compatibility library that becomes a graveyard for such legacy interfaces so that they don’t hold up progress in the library. It remains to be seen how this pans out, but I would definitely be happy to see this progress; libio was one of my backlog projects for years. I had to miss Raji’s talk on powerpc glibc improvements since I had to be in another meeting, so I’ll have to catch it when the video comes out.

The two BoFs for glibc dealt with a number of administrative and development issues, details of which Carlos will post on the mailing list soon. The highlights for me were the malloc instrumented benchmarks that Carlos wants to add to benchtests and build and review tools. Once I clear up my work backlog a bit, I'll attempt to set up something like phabricator or gerrit and see how that works out for the community instead of patchwork. I am convinced that all of the issues that we want to solve like crediting reviewers, ensuring good git commit logs, running automated builds and tests, etc. can only be effectively solved with a proper review tool in place to review patches.

There was also a discussion on redoing the makefiles in glibc so that it doesn't spend so much time doing dependency resolution, but I am going to pretend that it didn't happen because it is an ugly ugly task :/

I’m back home now, recovering from the cold that worsened while I was in Prague before I head out again in a couple of weeks to SFO for Linaro Connect. I’ve booked tickets for whale watching tours there, so hopefully I’ll be posting some pictures again after a long break.

by Siddhesh at September 09, 2017 06:16

September 09, 2017

Bin Chen

Cryptography


Cryptography is the fundamental technology underlying lots of hot topics nowadays, such as the security of IoT systems, or blockchain and its primary application - cryptocurrency (Bitcoin is one of them). Hence, a basic understanding of the problems classic cryptography is trying to solve is vital to get a genuine appreciation of its trending applications today.
I tried to fix that on myself by taking a Cryptography course on Coursera. It is an excellent course, taught by Professor Dan Boneh from Stanford University. And it is free! It has everything you need on cryptography, probably a little bit too much for most of us. I strongly encourage you to take a look at the course if you really want to understand what security means technically.
Below are the notes I took during the course. I'm not sure how useful they will be for others, since the chance of understanding cryptography by simply reading notes is slim - you'll really have to think it through very hard.

Overview

Security Goal

  • Confidentiality : data is kept secret
  • Integrity : data isn't modified
  • Message Authentication : the sender of the message is authentic and the message itself isn't modified
  • nonrepudiation : sender of the message can't deny the creation of the message
For example: Alice send message to Bob
  • Confidentiality means only Alice and Bob understand the message
  • Integrity means Bob can be sure the message hasn't been modified
  • Authentication means Bob is sure the message was sent by Alice and the message isn't modified
  • Nonrepudiation means Alice can't deny that she created the message
  • Confidentiality uses a symmetric cipher or, less commonly, an asymmetric cipher
  • Integrity uses a MAC
  • Authentication uses digital signatures and MACs
  • Nonrepudiation uses digital signatures

Symmetric Cipher

  • Provide Confidentiality
  • Def enc(plaintext, key) -> cyphered text; dec(cyphered_text, key) -> plain text
  • Stream Cipher
    • RC-4
  • Block Cypher
    • DES,3-DES
    • AES

MAC

  • Provide Integrity and Authentication
  • Based on either a symmetric cipher or a hash

Asymmetric Cipher, (G,E,D)

  • Provide Confidentiality (but mainly for key exchange, not for long message) and Nonrepudiation
  • Def G: generate key pair, (sk, pk) E: enc(plaintext, public key) -> cyphered text; D: dec(cyphered text, private key) -> plain text
  • RSA <- integer factorization
  • ElGamal <- discrete logarithms
  • Elliptic Curve <- elliptic curves

Digital Signature

  • Provide Nonrepudiation
  • Based on an asymmetric cipher or DSA
  • Theory: the sender signs the message with her private key, so she can't deny the creation of the message, because only she has the private key.
  • Notes: the private (secret) key is used to sign, the public key is used to verify. Compare with encryption, where the public key is used to encrypt and the private key is used to decrypt.
Sign with the private key, verify with the public key (a short JCA sketch follows below).
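As a concrete illustration of sign-with-private / verify-with-public, here is a minimal JCA sketch (the class name and message are made up for illustration; not from the course):
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureSketch {
    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] message = "pay Bob 10 dollars".getBytes("UTF-8");

        // sign with the private key (only the owner can do this)
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(message);
        byte[] sig = signer.sign();

        // verify with the public key (anyone can do this)
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(message);
        System.out.println("signature valid? " + verifier.verify(sig));
    }
}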

Key Establishment

  • Certificate
  • PKI

Stream Ciphers

  • Cipher (K,M,C)
    • E(k,m) = k XOR m
    • D(k,c) = k XOR c
  • One Time Pad (OTP) : a stream cypher for which
    1. the key stream k1,k2,k3,... is randomly generated
    2. each key stream bit ki is used only once
    3. the key stream is known to the legitimate parties only
    OTP is perfectly secure, but it requires the key stream generator to be truly random and len(k) >= len(m), so it is hard to use in practice.
  • Pseudorandom Generators (PRG) Make OTP practical by replacing true randomness with pseudorandomness: E(k,m) = m XOR G(k), D(k,c) = c XOR G(k), where G : {0,1}^s -> {0,1}^n expands a small seed into a long key stream. A tiny XOR sketch follows after this list.
  • The stream cipher is still alive - RC4 link
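A tiny sketch of the XOR cipher idea above (class name and message are illustrative; in a real OTP the pad would be truly random, as long as the message, and never reused):
import java.security.SecureRandom;

public class OtpSketch {
    // E(k,m) = k XOR m and D(k,c) = k XOR c are the same operation
    static byte[] xor(byte[] key, byte[] data) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (key[i] ^ data[i]); // key must be at least as long as data
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] m = "attack at dawn".getBytes("UTF-8");
        byte[] k = new byte[m.length];
        new SecureRandom().nextBytes(k);   // stand-in for a truly random, single-use pad
        byte[] c = xor(k, m);              // encrypt
        System.out.println(new String(xor(k, c), "UTF-8")); // decrypt: (k XOR m) XOR k = m
    }
}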

Block Cyphers

  • Confusion and Diffusion
    • Confusion: obscure key and value
    • Diffusion: each plaintext bit is spread over all of the ciphertext -> a 1-bit change in the plaintext results in a big change in the ciphertext
  • Operate on a fixed block size, e.g 64bits, 128bits(AES)
  • Modes
    • Block Modes : making same block of plaintext output different cyphertext
      • ECB (Electronic Code Book)
      • CBC (Cipher Blocking Chain)
      • CFB (Cipher Feedback)
      • OFB (Output Feedback)
      • CTR (Counter)
      • OCB (Offset Codebook)
      • GCM
    • Padding Modes
      • PKCS7
  • DES, considered not secure nowadays.
    • PRF, PRP
    • Feistel network
    • 3-round Feistel network
  • AES, DES replacement
    • To counter the plain text attack
      • CBC model, using random Initialization Vector,
      • Random ctr mode
  • Example: DES, JCA(Java Cryptography Architecture) API
// sender:
// 1. generate the key
SecretKey secretKey = KeyGenerator.getInstance("DES").generateKey();
// 2. encrypt with a mode (CBC) and padding (PKCS5)
Cipher encryptor = Cipher.getInstance("DES/CBC/PKCS5Padding");
encryptor.init(Cipher.ENCRYPT_MODE, secretKey);
// 3. share the key
String keyString = Base64.getEncoder().encodeToString(secretKey.getEncoded());
// send the Base64-encoded shared key (and the CBC IV from encryptor.getIV());
// the receiver will be able to reconstruct the shared key and use it to decrypt

// receiver:
// 1. get the key, the encrypted data and other encryption-specific data (e.g. the IV)
// 2. decrypt (see the runnable sketch below)
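Putting both sides together, here is a runnable sketch of the same DES/CBC flow (the message string is made up; in a real system the Base64 key, the IV and the ciphertext would travel over the wire):
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class DesCbcSketch {
    public static void main(String[] args) throws Exception {
        // sender: generate the key, encrypt, then share the Base64 key, IV and ciphertext
        SecretKey secretKey = KeyGenerator.getInstance("DES").generateKey();
        Cipher encryptor = Cipher.getInstance("DES/CBC/PKCS5Padding");
        encryptor.init(Cipher.ENCRYPT_MODE, secretKey);
        byte[] ciphertext = encryptor.doFinal("hello bob".getBytes("UTF-8"));
        String keyString = Base64.getEncoder().encodeToString(secretKey.getEncoded());
        byte[] iv = encryptor.getIV(); // CBC needs the IV on the receiving side too

        // receiver: reconstruct the key from the Base64 string and decrypt
        SecretKeySpec key = new SecretKeySpec(Base64.getDecoder().decode(keyString), "DES");
        Cipher decryptor = Cipher.getInstance("DES/CBC/PKCS5Padding");
        decryptor.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        System.out.println(new String(decryptor.doFinal(ciphertext), "UTF-8"));
    }
}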

Message Integrity

  • code or data wasn't modified (e.g. secure boot or disk encryption)

Definition

  • Definition : MAC I = (S,V) over (Key,Message,Tag)
    • S(k, m) -> t, [m || t] is sent
    • V(k, [m, t]) -> OK?
  • Secure MAC Attacker can't produce valid tag for new message.
  • Important concept and properties
    1. Cryptographic checksum A MAC generates a cryptographically secure authentication tag for a given message.
    2. Symmetric MACs are based on secret symmetric keys. The signing and verifying parties must share a secret key.
    3. Arbitrary message size MACs accept messages of arbitrary length.
    4. Fixed output length MACs generate fixed-size authentication tags.
    5. Message integrity MACs provide message integrity: Any manipulations of a message during transit will be detected by the receiver.
    6. Message authentication The receiving party is assured of the origin of the message.
    7. No nonrepudiation Since MACs are based on symmetric principles, they do not provide nonrepudiation.
  • Secure PRF (Pseudorandom Function) -> Secure MAC
    • AES is a secure PRF so it is secure MAC but works only on 16 bytes
    • Given a PRF for short messages, construct a PRF for long messages, or convert a small MAC to a big MAC; there are two ways of doing this
      • CBC-MAC - build it from block cypher
      • HMAC - build it from Hash function

CBC-MAC, NMAC, PMAC - all are PRF

Hash Collision Resistance

  • Let H: M -> T be a hash function; a collision is H(m0) = H(m1) with m0 != m1
  • Requirements/features:
    • preimage resistance, or one-wayness: can't recover the message from the hash
    • second preimage resistance / collision resistance: for a hash function H, the probability that an attacker can find m1 and m2 such that H(m1) = H(m2) is negligible.
  • MAC from collision Resistance Let I = (S,V) be MAC for short message over (K, M, T), e.g AES Let H : Mbig -> M, called Digest/hash algorithm, e.g md5/sha1/sha256/etc.. Define Ibig = (Sbig, Vbig) over (K, Mbig, T) as Sbig(k, m) = S(k, H(m)), Vbig(k, m, t) = V(k, H(m), t)
    If I is secure MAC and H is collision resistant, then Ibig is secure.
    Example: let H = SHA256, S = AES ; then Sbig(k,m) = AES(k, SHA256(m))
    Note: this is not HMAC!!
  • Properties
    • block size : usually the hash function is block based - the input is processed block by block, and the output of one block feeds into the next stage as input (see the Merkle–Damgård construction). Remember: 512 bits (for SHA-256).
    • max message size : remember, 2^64 - 1 bits (for SHA-256)
    • digest size : the output of hash is fixed for certain algorithm (e.g 256 for SHA256).

Merkle-Damgård Paradigm

  • Given a C.R. function for short messages, build a C.R. function for long messages: given a compression function h : T * X -> T, get H : Xbig -> T. Chain variables (see the chaining sketch after this list):
    H0 = h(IV, m0)
    Hi = h(Hi-1, mi)
  • Build a compression function from a block cypher. Davies-Meyer compression function: h(H,m) = E(m, H) XOR H. Use the message as the key and the hash value (from the previous output) as the value to be encrypted. The output is the same size as the input hash value.
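A minimal sketch of the chaining loop above, assuming a hypothetical compress(chain, block) function and that the message has already been padded into fixed-size blocks (compress is faked with SHA-256 here just to have something concrete to run):
import java.security.MessageDigest;

public class MerkleDamgardSketch {
    static final byte[] IV = new byte[32]; // fixed initial chaining value

    // stand-in for a real compression function h : T x X -> T
    static byte[] compress(byte[] chain, byte[] block) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(chain);
        md.update(block);
        return md.digest();
    }

    // H0 = h(IV, m0); Hi = h(Hi-1, mi); the last chaining value is the digest
    static byte[] hash(byte[][] paddedBlocks) throws Exception {
        byte[] chain = IV;
        for (byte[] block : paddedBlocks) {
            chain = compress(chain, block);
        }
        return chain;
    }

    public static void main(String[] args) throws Exception {
        byte[][] blocks = { new byte[64], new byte[64] }; // two dummy 512-bit blocks
        System.out.println(hash(blocks).length);          // 32-byte chaining value
    }
}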

SHA256

  • A collision-resistant hash built using
    • Merkle-Damgård paradigm
    • Davies-Meyer compression function
    • Block cipher: SHACAL-2
    block size: 512 digest size: 256
  • Very useful! (a short digest example follows after this list)
    • Used by HmacSHA256 to build a hash-based MAC
    • Used by RSA
    • Used by EC
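For reference, computing a SHA-256 digest through the JCA looks like this (a trivial sketch; the input string is arbitrary):
import java.security.MessageDigest;

public class Sha256Sketch {
    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest("hello world".getBytes("UTF-8"));
        System.out.println(digest.length * 8); // 256-bit digest, regardless of the input size
    }
}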

HMAC - a way of building secure MAC from Hash

  • The obvious but not secure way of building a MAC from a hash: H(k||x) or H(x||k)
  • HMAC - a secure way of building a MAC from a hash: H((k XOR opad) || H((k XOR ipad) || m))
    k is the symmetric key padded with zeros up to the block size; ipad is a repetition of 0011 0110 up to the block size; opad is a repetition of 0101 1100 up to the block size.
    e.g. for HmacSHA256 the block size is 512 bits.
    • HMAC is not an acronym for a generic way of building a MAC from a hash, but for the specific construction described above. The variants differ only in which hash function is used; e.g. when the hash is SHA256 it is called HmacSHA256, and similarly we have HmacSHA224, ...
    • the key size is defined by the hash function
    • the tag size is defined by the digest size of the hash function.
    • more parameters may be required depending on the scheme (e.g. a salt/IV), and they will be needed on the verification side as well.
  • HmacSHA256
  • Although a Merkle-Damgård hash is collision resistant and works on long messages, it is not safe to use it as a MAC directly. We need a PRF construction, and that is called Hash-MAC, or HMAC (a JCA example follows below).
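A minimal JCA sketch of the HmacSHA256 usage mentioned above (the key and message are made up; the ipad/opad details are handled internally by the provider):
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class HmacSketch {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("HmacSHA256").generateKey();
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] tag = mac.doFinal("message".getBytes("UTF-8")); // 32-byte tag (SHA-256 digest size)
        System.out.println(tag.length);
    }
}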

Authenticated Encryption

  • MAC and then Encrypt (SSL)
  • Encrypt and then MAC (IPsec) - a JCA sketch follows after this list
    • see android keystore client implementation
  • Encrypt and MAC (SSH) - Not safe
  • Example: TLS
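A rough sketch of the encrypt-then-MAC option, composed from the JCA primitives already introduced (class name is made up; key management and transport of the IV are omitted, and this is not the exact TLS or IPsec record format):
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class EncryptThenMacSketch {
    public static void main(String[] args) throws Exception {
        SecretKey encKey = KeyGenerator.getInstance("AES").generateKey();
        SecretKey macKey = KeyGenerator.getInstance("HmacSHA256").generateKey();

        // 1. encrypt the plaintext (a fresh IV is generated; in practice send it along)
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, encKey);
        byte[] ciphertext = c.doFinal("secret".getBytes("UTF-8"));

        // 2. MAC the ciphertext (and, in practice, the IV/header as well)
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(macKey);
        byte[] tag = mac.doFinal(ciphertext);

        // the receiver verifies the tag first and only then decrypts
        System.out.println("ciphertext " + ciphertext.length + " bytes, tag " + tag.length + " bytes");
    }
}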

TLS

  • Uses MAC-then-encrypt
  • unidirectional keys : both the client and the server have two keys each; symmetric keys are used!
    • browser_to_server_key_mac
    • browser_to_server_key_enc
    • server_to_browser_key_mac
    • server_to_browser_key_enc
  • CBC-AES128, HMAC-SHA1
  • browser -> server
    • record == [ header || data || tag || pad ]
    • tag <- sign [ctr_b_s || header || data] with browser_to_server_key_mac
    • pad [header||data||tag] to the AES block size
    • encrypt: CBC encrypt with browser_to_server_key_enc and a new random IV
    • prepend header
  • server side (receiving a browser -> server record)
    • decrypt with browser_to_server_key_enc
    • check the pad
    • check the tag on [ctr_b_s || header || data]

Key derivation

  • generate many keys from the source key
  • if the source key is uniformly distributed, use a KDF
  • if not, extract and then expand
    • extract: make the key material indistinguishable from a random distribution, using a salt
    • expand : use a KDF
  • HKDF : HMAC-based KDF
    • extract: k <- HMAC(salt, source_key)
  • PBKDF : Password Based KDF
    • Don't use HKDF to generate a key from a password, since passwords have low entropy
    • standard: PKCS#5(PBKDF1)
      • iterate c times using H(salt|pwd), slowing the attacker down each time (a password-based KDF sketch follows after this list)
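A small sketch of password-based key derivation through the JCA (this uses PBKDF2 rather than PBKDF1, since that is what the standard providers ship; the password, salt size and iteration count are illustrative):
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;

public class Pbkdf2Sketch {
    public static void main(String[] args) throws Exception {
        char[] password = "correct horse battery staple".toCharArray();
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // many iterations slow down brute-force guessing of low-entropy passwords
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] derivedKey = factory.generateSecret(spec).getEncoded(); // 32-byte derived key
        System.out.println(derivedKey.length);
    }
}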

Problem with Symmetric Encryption

  • Shared key exchange
  • Number of keys : a key is needed between any 2 of the N parties
  • No protection against cheating by Alice or Bob, since they have the same key

Usage/Purpose of Public-Key Algorithms:

It is important to realize that public-key algorithms are not just for encryption. Actually, since public-key encryption is very slow, it is rarely used directly to encrypt large data; instead, it is often used to encrypt/exchange the symmetric key that will then be used to encrypt the actual message.
  • Key Establishment e.g Diffie–Hellman key exchange (DHKE) or RSA key transport protocols.
  • Encryption Encrypt messages using, e.g., RSA or Elgamal.
  • Nonrepudiation Providing nonrepudiation and message integrity can be realized with digital signature algorithms, e.g., RSA, DSA or ECDSA.
  • Identification We can identify entities using challenge-and-response protocols together with digital signatures.

3 Type of Public-Key Algorithm Families

  • Integer-Factorization Schemes RSA
  • Discrete Logarithm Schemes Diffie–Hellman key exchange, Elgamal encryption or the Digital Signature Algorithm (DSA).
  • Elliptic Curve (EC) Schemes Elliptic Curve Diffie–Hellman key exchange (ECDH) and the Elliptic Curve Digital Signature Algorithm (ECDSA). ECDSA used by bitcoin

Basic Key Exchange

Solutions

  • online trusted 3rd party (TTP)
  • Diffie-Hellman protocol
  • Public-Key Encryption The key exchange problem triggered the start of public-key cryptography

Online Trusted 3rd Party (TTP)

  • Toy protocol: when Alice wants to share a key with Bob, she asks the TTP for a shared key. The TTP sends Alice [ enc(shared_key, alice_privateKey), enc(shared_key, bob_privateKey) ]. The latter message is then forwarded to Bob by Alice. So Alice and Bob have set up a shared key.
  • insecure against replay attacks

Diffie-Hellman Key Exchange

Public-key encryption

Number Theory

  • gcd : gcd(x,y) can be written as a*x + b*y for some integers a, b (extended Euclidean algorithm)
  • Modular inverse: x is invertible in Zp if there is a y in Zp such that x*y = 1 in Zp. Lemma: x is invertible iff gcd(x,p) = 1
    (Zp)* represents all the invertible elements in Zp, e.g. (Z12)* = [1,5,7,11]
  • Fermat's theorem: let p be a prime; for all x in (Zp)*, x^(p-1) = 1 in Zp. This can be used to test whether a number is prime (see the BigInteger example after this list).
  • Dlog
    • Elliptic curve group
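These facts are easy to play with using java.math.BigInteger (the numbers below are just small examples):
import java.math.BigInteger;

public class ModularSketch {
    public static void main(String[] args) {
        // 5 is invertible mod 12 because gcd(5,12) = 1
        BigInteger p = BigInteger.valueOf(12);
        BigInteger x = BigInteger.valueOf(5);
        System.out.println(x.modInverse(p)); // prints 5, since 5*5 = 25 = 1 mod 12

        // Fermat: for prime p and x in (Zp)*, x^(p-1) = 1 mod p
        BigInteger prime = BigInteger.valueOf(101);
        System.out.println(BigInteger.valueOf(3).modPow(prime.subtract(BigInteger.ONE), prime)); // prints 1
    }
}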

Public-Key Encryption

  • Asymmetric, public key (G,E,D) G: generate key pair, (sk, pk) E: enc(plaintext, public key) -> cyphered text; D: dec(cyphered text, private key) -> plain text
  • Using TDF
  • Using Diffie-Hellman: ElGamal

Trapdoor Function (TDF), or one-way function

  • TDF: a triple of algorithms (G, F, F') over X -> Y
    • G: random algorithm generate key pair(sk, pk)
    • F: F(pk, X) -> Y
    • F': F'(sk, Y) -> X
  • Secure TDF: F is a one-way function; that is to say, without sk the probability of correctly inverting F and recovering X is negligible.

Public-Key Encryption using Secure TDF

  • we need three components (ISO standard):
    • (G, F, F') : A secure TDF
    • (E, D) : A symmetric authenticated encryption defined over (K,M, C)
    • H: X -> k : A Hash
  • For the public encryption, we need (G,E,D)
    • G: Use the same G in TDF
    • E(pk, m) -> c
    • D(sk, c) -> m
  • E(pk, m)
    • random pick x from X
    • y = F(pk,x)
    • k = H(x) , c = E(k, m)
    • output [y, c]
  |  F(pk,x)  |  E(H(x), m)  |
      header        body
  • D(sk, [y,c])
    • x = F'(sk, y) , k = H(x), m = D(k, c)
  • Don't apply the TDF directly to the message! It is not secure. Instead, the TDF is used to encode/trapdoor the source of the symmetric key that will be used to encrypt the message (a hybrid sketch follows below).
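A hybrid-encryption sketch in the same spirit: the public key only protects a fresh symmetric key, and the symmetric key protects the message. (This uses RSA key wrapping rather than the exact k = H(x) construction above; class and variable names are illustrative.)
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class HybridEncryptionSketch {
    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // sender: pick a fresh symmetric key, wrap it with the public key,
        // and encrypt the actual message symmetrically
        SecretKey k = KeyGenerator.getInstance("AES").generateKey();
        Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        rsa.init(Cipher.WRAP_MODE, kp.getPublic());
        byte[] wrappedKey = rsa.wrap(k);            // the "header": the trapdoor-ed key

        Cipher aes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, k);
        byte[] body = aes.doFinal("the actual message".getBytes("UTF-8"));
        byte[] iv = aes.getIV();

        // receiver: unwrap the symmetric key with the private key, then decrypt the body
        Cipher rsa2 = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        rsa2.init(Cipher.UNWRAP_MODE, kp.getPrivate());
        SecretKey k2 = (SecretKey) rsa2.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);

        Cipher aes2 = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes2.init(Cipher.DECRYPT_MODE, k2, new IvParameterSpec(iv));
        System.out.println(new String(aes2.doFinal(body), "UTF-8"));
    }
}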

RSA

RSA in Practice, PKCS

  • used in practice, different from what we described above
  • key difference:
  key (128 bits)  ->  preprocess (to 2048 bits)  ->  RSA
  • RSA : Java API
// sender
// 1. generate the private and public key pair
KeyPairGenerator rsaKeyGen = KeyPairGenerator.getInstance("RSA");
KeyPair keyPair = rsaKeyGen.generateKeyPair();
PrivateKey privateKey = keyPair.getPrivate(); // keep your private key secret
PublicKey publicKey = keyPair.getPublic();

// 2. share the public key; don't share the key object directly but share
// some parameters (e.g. the modulus and exponent for RSA), so that the
// receiver can reconstruct the public key.
KeyFactory keyFactory = KeyFactory.getInstance("RSA");
RSAPublicKeySpec rsaPublicKeySpec =
        keyFactory.getKeySpec(publicKey, RSAPublicKeySpec.class);
BigInteger modulus = rsaPublicKeySpec.getModulus();
BigInteger exponent = rsaPublicKeySpec.getPublicExponent();

// receiver
// 1. reconstruct the public key from the modulus and exponent
// 2. generate the data and use the public key to encrypt the data
// 3. send the encrypted data; the other side will decrypt it using
//    its private key
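A sketch of the receiver side, assuming the modulus and exponent above were received (the class and method names are made up for illustration):
import javax.crypto.Cipher;
import java.math.BigInteger;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.spec.RSAPublicKeySpec;

public class RsaReceiverSketch {
    // rebuild the sender's public key from the shared modulus/exponent and
    // encrypt a small payload for it
    static byte[] encryptFor(BigInteger modulus, BigInteger exponent, byte[] data) throws Exception {
        PublicKey pub = KeyFactory.getInstance("RSA")
                .generatePublic(new RSAPublicKeySpec(modulus, exponent));
        Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, pub);
        return rsa.doFinal(data); // data must be smaller than the modulus minus the padding overhead
    }
}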

Public-Key Encryption using Diffie-Hallman

Public-Key Encryption Application

  • interactive
    • Used to exchange a symmetric key
  • non-interactive
    • Encrypted file system where access can be shared: enc. the message with symmetric encryption and enc. the sym. key with public-key encryption, e.g. [ E(pk1, k) | E(pk2, k) | Es(k, m) ]

by Bin Chen (noreply@blogger.com) at September 09, 2017 06:11

August 25, 2017

Steve McIntyre

Let's BBQ again, like we did last summer!

It's that time again! Another year, another OMGWTFBBQ! We're expecting 50 or so Debian folks at our place in Cambridge this weekend, ready to natter, geek, socialise and generally have a good time. Let's hope the weather stays nice, but if not we have gazebo technology... :-)

Many thanks to a number of awesome companies and people near and far who are sponsoring the important refreshments for the weekend:

I've even been working on the garden this week to improve it ready for the event. If you'd like to come and haven't already told us, please add yourself to the wiki page!

August 08, 2017 02:00

August 19, 2017

Leif Lindholm

OpenPlatformPkg is dead, long live edk2-platforms!

For a few years now, I have been working towards improving the availability of open source platform ports and device drivers for EDK2.

Initially, this began by setting up OpenPlatformPkg. This has been used both for platforms from Linaro members and external parties, and has already led to some amount of reduced code duplication, and moving common functionality to EDK2.

Now, the platforms that were in OpenPlatformPkg have been moved into the master branch of edk2-platforms, and OpenPlatformPkg itself has become a read-only archive.

So ... what changes?

Well, the first and most obvious change is that the repository now lives in the TianoCore area on github: https://github.com/tianocore/edk2-platforms

Like OpenPlatformPkg, this is not part of the main EDK2 repository. Unlike OpenPlatformPkg, there is an official way to work with this repository as part of the TianoCore group of projects. Code contributions to this repository are reviewed on the edk2-devel mailing list.

Secondly, the directory structure changes slightly. I will let you discover the specifics for yourself.

Thirdly, edk2-platforms is being kept license clean and source only. So binary-only content from OpenPlatformPkg was moved to a separate edk2-non-osi repository. We still want to enable platforms that have a number of non-open-source components to be able to share part of their code, but edk2-platforms will contain only free software.

At the same time, we change the build behavior from having OpenPlatformPkg nested under edk2 to building with edk2, edk2-platforms and (if needed) edk2-non-osi located "wherever" and individual packages located using PACKAGES_PATH.

Updates to uefi-tools

As before, I am way too lazy to keep figuring out the build command lines for each platform/toolchain combination, so I added support to uefi-tools for the new structure as well. Rather than breaking the compatibility of uefi-build.sh with OpenPlatformPkg, or making it more complex by making it support both, I added a new script called edk2-build.sh (which uses a new default platform configuration file called edk2-platforms.config).

Usage-wise, the most visible change is that the script no longer needs to be executed inside the edk2 directory; any directory it is executed from becomes the WORKSPACE, and build output, including intermediary stages, will be placed underneath it.

Secondly, the addition of new command line parameters to point out the locations of the various repositories involved in a build:

-e <edk2 directory>
-p <edk2-platforms directory>
-n <edk2-non-osi directory>

Release management

Well, the old strategies that could be used with edk2/OpenPlatformPkg to achieve a coherent commit on a single hash (git subrepos or submodules) are no longer much use. In order to make a tagged release over multiple repositories, a tool such as mr or repo will be necessary.

I will have to figure out which I pick for the Linaro Enterprise 17.10 release, but I have several weeks left for that :)

by Leif Lindholm at August 08, 2017 23:00

August 05, 2017

Leif Lindholm

Another new blog...

Well, I guess it's that time again. Much as I liked blosxom, it's not really maintained anymore, and the plugin architecture is ... archaic ... to say the least. So last year I started looking into pelican and found it would simplify my life a bit ... and then I started trying to move my existing blosxom theme over to pelican, and then I got bored of that and dropped everything.

However, I do need to be posting some more, and pelican has very nice and simple tags and drafts handling, as well as a lot more useful metadata functionality, whilst remaining as no-frills as I like (markdown support is good enough).

So here is a migration of all of the old content to the new architecture. Hopefully, I will get around to sorting the theme at some point, but at least I am functional again.

by Leif Lindholm at August 08, 2017 21:55

August 03, 2017

Siddhesh Poyarekar

Tunables story continued - glibc 2.26

Those of you tuned in to the wonderful world of system programming may have noticed that glibc 2.26 was released last night (or daytime if you live west of me or middle of the night/dawn if you live east of me, well you get the drift) and it came out with a host of new improvements, including the much awaited thread cache for malloc. The thread cache for malloc is truly a great step forward - it brings down latency of a bulk of allocations from hundreds of cycles to tens of cycles. The other major improvement that a bulk of users and developers will notice is the fact that glibc now detects when resolv.conf has changed and reloads the lookup configuration. Yes, this was long overdue but hey, it’s not like we were refusing patches for the past half a decade, so thank the nice soul (Florian Weimer) who actually got it done in the end.

We are not here to talk about the improvements mentioned in the NEWS. We are here to talk about an improvement that will likely have a long term impact on how optimizations are implemented in libraries. We are here to talk about…

TUNABLES!

Yes, I’m back with tunables, but this time I am not the one who did the work, it’s the wonderful people from Cavium and Intel who have started using tunables for a use case I had alluded to in my talk at Linaro Connect BKK 2016 and also in my previous blog post on tunables, which was the ability to influence IFUNCs.

IFUNCs? International functions? Intricate Functions? Impossibly ridiculous Functions?

There is a short introduction of the GNU Indirect Functions on the glibc wiki that should help you get started on this very powerful yet very complicated concept. In short, ifuncs extend the GOT/PLT mechanism of loading functions from dynamic libraries to loading different implementations of the same function depending on some simple selection criteria. Traditionally this has been based on querying the CPU for features that it supports and as a result we have had multiple variants of some very common functions such as memcpy_sse2 and memcpy_ssse3 for x86 processors that get executed based on the support declared by the processor the program is running on.

Tunables allow you to take this idea further because there are two ways to get performance benefits, (1) by utilizing all of the CPU features that help and (2) by catering to the workload. For example, you could have a workload that performs better with a supposedly sub-optimal memcpy variant for the CPU purely because of the way your data is structured or laid out. Tunables allow you to select that routine by pretending that the CPU has a different set of capabilities than it actually reports, by setting the glibc.tune.hwcaps tunable on x86 processors. Not only that, you can even tune cache sizes and non-temporal thresholds (i.e. threshold beyond which some routines use non-temporal instructions for loads and stores to optimize cache usage) to suit your workload. I won’t be surprised if some years down the line we see specialized implementations of these routines that cater to specific workloads, like memcpy_db for databases or memset_paranoid for a time invariant (or mostly invariant) implementation of memset.

Beyond x86

Here’s where another very important feature landed in glibc 2.26: multiarch support in aarch64. The ARMv8 spec is pretty standard and as a result the high level instruction set and feature set of vendor chips is pretty much the same with some minor trivial differences. However, even though the spec is standard, the underlying microarchitecture implementation could be very different and that meant that selection of instructions and scheduling differences could lead to sometimes very significant differences in performance and vendors obviously would like to take advantage of that.

The only way they could reliably (well, kind of, there should be a whole blog post for this) identify their processor variant (and hence deploy routines for their processors) was by reading the machine identification register or MIDR_EL1. If you’re familiar with aarch64 registers, you’ll notice that this register cannot be read by userspace, it can only be read by the kernel. The kernel thus had to trap and emulate this instruction, support for which is now available since Linux 4.11. In glibc 2.26, we now use MIDR_EL1 to identify which vendor processor the program is running on and deploy an optimal routine (in this case for the Cavium thunderxt88) for the processor.

But wait, what about earlier kernels, how do they take advantage of this? There’s a tunable for it! There’s glibc.tune.cpu for aarch64 that allows you to select the CPU variant you want to emulate. For some workloads you’ll find the generic memcpy actually works better and the tunable allows you to select that as well.
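As a concrete sketch (the tunable name comes from the discussion above; the value and the program being launched are placeholders I picked for illustration), selecting the generic variant from a small Python wrapper could look like this:

import os
import subprocess

# Ask glibc to behave as if running on a "generic" aarch64 CPU, so the
# generic memcpy is selected instead of the MIDR-matched variant.
# "./my_workload" is a placeholder for whatever you want to measure.
env = dict(os.environ, GLIBC_TUNABLES="glibc.tune.cpu=generic")
subprocess.run(["./my_workload"], env=env, check=True)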

Finally, due to tunables, the much needed cleanup of LD_HWCAP_MASK happened, giving rise to the tunable glibc.tune.hwcap_mask. Tunables also eliminated a lot of the inconsistency in environment variable behaviour due to the way static and dynamic executables are initialized, so you’ll see far fewer differences in the way your applications behave when they’re built dynamically vs when they’re built statically.

Wow, that sounds good, where do I sign up for your newsletter?

The full list of hardware capability tunables is documented in the glibc manual, so take a look and feel free to hop on to the libc-help mailing list to discuss these tunables and suggest more ways in which you would like to tune the library for your workload. Remember that tunables don’t have any ABI/API guarantees for now, so they can be added or removed between releases as we deem fit. Also, your distribution may end up adding their own tunables too in future, so look out for those as well. Finally, system level tunables are coming up real soon to allow system administrators to control how users use these tunables.

Happy hacking!

by Siddhesh at August 08, 2017 06:57

July 25, 2017

Rémi Duraffort

Using requests with xmlrpc

Using XML-RPC with Python3 is really simple. Calling system.version on http://localhost/RPC2 is as simple as:

import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost/RPC2")
print(proxy.system.version())

However, the default client is missing many features, like handling proxies. Using requests for the underlying connection allows for greater control of the http request.

The xmlrpc client allows you to replace the underlying transport with a custom class. In order to use requests, we create a simple Transport subclass:

import requests
import xmlrpc.client

class RequestsTransport(xmlrpc.client.Transport):

    def request(self, host, handler, data, verbose=False):
        # set the headers, including the user-agent
        headers = {"User-Agent": "my-user-agent",
                   "Content-Type": "text/xml",
                   "Accept-Encoding": "gzip"}
        url = "https://%s%s" % (host, handler)
        try:
            response = None
            response = requests.post(url, data=data, headers=headers)
            response.raise_for_status()
            return self.parse_response(response)
        except requests.RequestException as e:
            if response is None:
                raise xmlrpc.client.ProtocolError(url, 500, str(e), "")
            else:
                raise xmlrpc.client.ProtocolError(url, response.status_code,
                                                  str(e), response.headers)

    def parse_response(self, resp):
        """
        Parse the xmlrpc response.
        """
        p, u = self.getparser()
        p.feed(resp.text)
        p.close()
        return u.close()

To use this Transport class, create the ServerProxy with:

proxy = xmlrpc.client.ServerProxy(uri, transport=RequestsTransport())

We can now use requests to:

  • use proxies
  • skip ssl verification (on a development server) or add the right certificate chain
  • set the headers
  • set the timeouts
  • ...

See the documentation or an example for more information.
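For instance, building on the RequestsTransport above, a variant that passes a proxy configuration and a timeout through to requests might look like this (a sketch only; the extra constructor parameters and the proxy URL are my own, not part of xmlrpc.client):

class ConfigurableTransport(RequestsTransport):
    """Pass extra requests options (proxies, timeout, verify) to requests.post."""
    def __init__(self, proxies=None, timeout=10, verify=True):
        super().__init__()
        self.proxies = proxies
        self.timeout = timeout
        self.verify = verify

    def request(self, host, handler, data, verbose=False):
        headers = {"User-Agent": "my-user-agent",
                   "Content-Type": "text/xml",
                   "Accept-Encoding": "gzip"}
        url = "https://%s%s" % (host, handler)
        response = requests.post(url, data=data, headers=headers,
                                 proxies=self.proxies, timeout=self.timeout,
                                 verify=self.verify)
        response.raise_for_status()
        return self.parse_response(response)

proxy = xmlrpc.client.ServerProxy(
    "https://localhost/RPC2",
    transport=ConfigurableTransport(proxies={"https": "http://proxy.example:3128"},
                                    timeout=30))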

by Rémi Duraffort at July 07, 2017 08:33

July 24, 2017

Peter Maydell

Installing Debian on QEMU’s 64-bit ARM “virt” board

This post is a 64-bit companion to an earlier post of mine where I described how to get Debian running on QEMU emulating a 32-bit ARM “virt” board. Thanks to commenter snak3xe for reminding me that I’d said I’d write this up…

Why the “virt” board?

For 64-bit ARM, QEMU emulates many fewer boards, so “virt” is almost the only choice, unless you specifically know that you want to emulate one of the 64-bit Xilinx boards. “virt” supports PCI, virtio, a recent ARM CPU and large amounts of RAM. The only thing it doesn’t have out of the box is graphics.

Prerequisites and assumptions

I’m going to assume you have a Linux host, and a recent version of QEMU (at least QEMU 2.8). I also use libguestfs to extract files from a QEMU disk image, but you could use a different tool for that step if you prefer.

I’m going to document how to set up a guest which directly boots the kernel. It should also be possible to have QEMU boot a UEFI image which then boots the kernel from a disk image, but that’s not something I’ve looked into doing myself. (There may be tutorials elsewhere on the web.)

Getting the installer files

I suggest creating a subdirectory for these and the other files we’re going to create.

wget -O installer-linux http://http.us.debian.org/debian/dists/stretch/main/installer-arm64/current/images/netboot/debian-installer/arm64/linux
wget -O installer-initrd.gz http://http.us.debian.org/debian/dists/stretch/main/installer-arm64/current/images/netboot/debian-installer/arm64/initrd.gz

Saving them locally as installer-linux and installer-initrd.gz means they won’t be confused with the final kernel and initrd that the installation process produces.

(If we were installing on real hardware we would also need a “device tree” file to tell the kernel the details of the exact hardware it’s running on. QEMU’s “virt” board automatically creates a device tree internally and passes it to the kernel, so we don’t need to provide one.)

Installing

First we need to create an empty disk drive to install onto. I picked a 5GB disk but you can make it larger if you like.

qemu-img create -f qcow2 hda.qcow2 5G

(Oops — an earlier version of this blogpost created a “qcow” format image, which will work but is less efficient. If you created a qcow image by mistake, you can convert it to qcow2 with mv hda.qcow2 old-hda.qcow && qemu-img convert -O qcow2 old-hda.qcow hda.qcow2. Don’t try it while the VM is running! You then need to update your QEMU command line to say “format=qcow2” rather than “format=qcow”. You can delete the old-hda.qcow once you’ve checked that the new qcow2 file works.)

Now we can run the installer:

qemu-system-aarch64 -M virt -m 1024 -cpu cortex-a53 \
  -kernel installer-linux \
  -initrd installer-initrd.gz \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-pci,drive=hd \
  -netdev user,id=mynet \
  -device virtio-net-pci,netdev=mynet \
  -nographic -no-reboot

The installer will display its messages on the text console (via an emulated serial port). Follow its instructions to install Debian to the virtual disk; it’s straightforward, but if you have any difficulty the Debian installation guide may help.

The actual install process will take a few hours as it downloads packages over the network and writes them to disk. It will occasionally stop to ask you questions.

Late in the process, the installer will print the following warning dialog:

   +-----------------| [!] Continue without boot loader |------------------+
   |                                                                       |
   |                       No boot loader installed                        |
   | No boot loader has been installed, either because you chose not to or |
   | because your specific architecture doesn't support a boot loader yet. |
   |                                                                       |
   | You will need to boot manually with the /vmlinuz kernel on partition  |
   | /dev/vda1 and root=/dev/vda2 passed as a kernel argument.             |
   |                                                                       |
   |                              <Continue>                               |
   |                                                                       |
   +-----------------------------------------------------------------------+  

Press continue for now, and we’ll sort this out later.

Eventually the installer will finish by rebooting — this should cause QEMU to exit (since we used the -no-reboot option).

At this point you might like to make a copy of the hard disk image file, to save the tedium of repeating the install later.

Extracting the kernel

The installer warned us that it didn’t know how to arrange to automatically boot the right kernel, so we need to do it manually. For QEMU that means we need to extract the kernel the installer put into the disk image so that we can pass it to QEMU on the command line.

There are various tools you can use for this, but I’m going to recommend libguestfs, because it’s the simplest to use. To check that it works, let’s look at the partitions in our virtual disk image:

$ virt-filesystems -a hda.qcow2 
/dev/sda1
/dev/sda2

If this doesn’t work, then you should sort that out first. A couple of common reasons I’ve seen:

  • if you’re on Ubuntu then your kernels in /boot are installed not-world-readable; you can fix this with sudo chmod 644 /boot/vmlinuz*
  • if you’re running Virtualbox on the same host it will interfere with libguestfs’s attempt to run KVM; you can fix that by exiting Virtualbox

Looking at what’s in our disk we can see the kernel and initrd in /boot:

$ virt-ls -a hda.qcow2 /boot/
System.map-4.9.0-3-arm64
config-4.9.0-3-arm64
initrd.img
initrd.img-4.9.0-3-arm64
initrd.img.old
lost+found
vmlinuz
vmlinuz-4.9.0-3-arm64
vmlinuz.old

and we can copy them out to the host filesystem:

virt-copy-out -a hda.qcow2 /boot/vmlinuz-4.9.0-3-arm64 /boot/initrd.img-4.9.0-3-arm64 .

(We want the longer filenames, because vmlinuz and initrd.img are just symlinks and virt-copy-out won’t copy them.)

An important warning about libguestfs, or any other tools for accessing disk images from the host system: do not try to use them while QEMU is running, or you will get disk corruption when both the guest OS inside QEMU and libguestfs try to update the same image.

If you subsequently upgrade the kernel inside the guest, you’ll need to repeat this step to extract the new kernel and initrd, and then update your QEMU command line appropriately.

Running

To run the installed system we need a different command line which boots the installed kernel and initrd, and passes the kernel the command line arguments the installer told us we’d need:

qemu-system-aarch64 -M virt -m 1024 -cpu cortex-a53 \
  -kernel vmlinuz-4.9.0-3-arm64 \
  -initrd initrd.img-4.9.0-3-arm64 \
  -append 'root=/dev/vda2' \
  -drive if=none,file=hda.qcow2,format=qcow2,id=hd \
  -device virtio-blk-pci,drive=hd \
  -netdev user,id=mynet \
  -device virtio-net-pci,netdev=mynet \
  -nographic

This should boot to a login prompt, where you can log in with the user and password you set up during the install.

The installation has an SSH client, so one easy way to get files in and out is to use “scp” from inside the VM to talk to an SSH server outside it. Or you can use libguestfs to write files directly into the disk image (for instance using virt-copy-in) — but make sure you only use libguestfs when the VM is not running, or you will get disk corruption.

by pm215 at July 07, 2017 09:25

July 21, 2017

Gema Gomez

Acer Shawl

Last weekend I attended a class at The Sheep Shop. It was the Easy crochet lace class by Joanne Scrace. Just for attending the class, we got a copy of the Acer Shawl pattern by Joanne. It was easy to get into the rhythm of it and well explained. This is the sample I managed to do during the three hours of the class:

Class sample

I have continued working on it this week, and I managed to finish two skeins of 50g each of Louisa Harding Yarn, Amitola, color Tinkerbell (134). I have bought a third skein to make it slightly bigger, but it is looking lovely:

Shawl

Crochet hook used for this: 5.0mm.

This was the first time I worked with a colour changing yarn on a project like this. I have been rather careful when changing skeins to match the tones of both ends of the yarn, and the trick worked wonders for a very neat finish.

Thank you Joanne for such a lovely and simple pattern!

by Gema Gomez at July 07, 2017 23:00

July 15, 2017

Bin Chen

Booting Android with u-boot

u-boot is an open source bootloader that you will find in lots of embedded devices, including Android devices, and that’s what we are going to talk about today: booting up Android with u-boot.

Android Boot Image

An Android boot image usually contains the kernel image and the rootfs, and sometimes a dtb; you can either concatenate the dtb to the kernel image or put it into the 2ndloader section. We’ll explain this in more detail later.

| Section   | Description                                     |
| --------- | ----------------------------------------------- |
| Header    | kernel cmdline, base/offset for kernel/ramdisk  |
| kernel    | kernel, may include dtb                         |
| ramdisk   | rootfs                                          |
| 2ndloader | 2nd bootloader                                  |
The following command (simplified for the sake of simplicity) is what make bootimage uses to create an Android boot image:
mkbootimg --kernel zImage --ramdisk ramdisk.img.gz --cmdline 'xxxx' -o boot.img
We’ll ignore the details of how the rootfs (ramdisk.img.gz) is created, which is indeed very interesting.
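To see how those pieces line up in a built image, here is a small sketch of my own (not part of the Android tooling) that dumps the main header fields; it assumes the classic version-0 header layout, whose magic is "ANDROID!":

import struct

def read_boot_header(path):
    # Header layout (v0): 8-byte magic followed by eight 32-bit LE fields.
    with open(path, "rb") as f:
        data = f.read(40)
    (magic, kernel_size, kernel_addr, ramdisk_size, ramdisk_addr,
     second_size, second_addr, tags_addr, page_size) = struct.unpack("<8s8I", data)
    assert magic == b"ANDROID!", "not an Android boot image"
    return {"kernel_size": kernel_size, "kernel_addr": hex(kernel_addr),
            "ramdisk_size": ramdisk_size, "ramdisk_addr": hex(ramdisk_addr),
            "second_size": second_size, "second_addr": hex(second_addr),
            "tags_addr": hex(tags_addr), "page_size": page_size}

print(read_boot_header("boot.img"))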

bootm overview

bootm is a u-boot command used to boot a system from memory, as the suffix m indicates. The full form of this command takes three parameters:
# bootm kernel ramfs dtb. 
The first parameter is the kernel address, the second one is the ram rootfs and the last one is the dtb. Only the first parameter is mandatory, and we generally call it the boot image address, without limiting it to being a kernel.
u-boot supports several boot image formats, such as uImage, which is a u-boot defined image format. It is also called the legacy image format, but it is worth noting that uImage is not limited to boot images. Instead, it is a generic container format that u-boot can recognize. For example, you can package a raw ramdisk into the uImage format as well.
The other supported boot image format is the Android boot image format we discussed above. Since the Android boot image contains the rootfs, there is no need to specify the second parameter; instead, u-boot will extract the ramdisk out of the boot image and set it up correctly. For the dtb, one approach is to concatenate it to the kernel, but that requires the kernel to be aware of the dtb and able to pull it out, and not all kernels can do that: it is supported on aarch32 but not on aarch64.
Nowadays, people are encouraged to use a dtb and pass it explicitly to the kernel when booting. It is good practice to have a dedicated partition for the dtb, so that it can be upgraded independently. But retrieving the dtb (so as to pass it to bootm) depends on the partition scheme you are using. Ideally, what we need is a find_partition_by_name function (we have it as part_get_info_by_name in part.c). That is a breeze if GPT is used, but in the case of MBR (or other schemes) there will be some head scratching. As a compromise, I proposed to use the underused section of the Android boot image to hold the dtb, so that the Android boot image becomes a self-contained and self-sufficient boot image. And that is the way we use it for the Poplar 96Board.

bootm implementation

In essence, here are the steps performed by bootm:
  1. Find the kernel/os
  2. Find rootfs and dtb
  3. Relocate/decompress the kernel, if needed
  4. Relocate the rootfs if needed
  5. Set up the fdt or atags
  6. Jump to the kernel
In the bootm implementation, those steps are called states and the entry function is do_bootm_states. You activate a specific step by passing the corresponding enum values. For the bootm command, it simply calls:
return do_bootm_states(BOOTM_STATE_START | BOOTM_STATE_FINDOS |
                       BOOTM_STATE_FINDOTHER | BOOTM_STATE_LOADOS |
                       BOOTM_STATE_RAMDISK | BOOTM_STATE_OS_PREP |
                       BOOTM_STATE_OS_GO,
                       &images, 1);
We’ll go over those states and see what is done in each step:
  1. BOOTM_STATE_START
It sets up the lmb. It isn’t particularly interesting.
  2. BOOTM_STATE_FINDOS -> bootm_find_os -> boot_get_kernel
boot_get_kernel will set up images.os.image_start and images.os.image_len, which are the kernel location in the ram and its length.
Some other important fields and concepts: os.load, ep, os.start.
images.os.load = android_image_get_kload(os_hdr);
images.ep = images.os.load;
images.os.load is used as the relocation destination when it differs from os.image_start, and kernel relocation usually does happen. The relocation happens in the BOOTM_STATE_LOADOS stage, as we’ll see later.
images.ep is an alias for os.load. They should have the same value, and it is used as the kernel entry point after the relocation is done.
kernel_entry = (void (*)(void *fdt_addr, void *res0, void *res1,
                         void *res2))images->ep;
images.os.start: for Android, it is the start of the whole boot image (not the kernel inside the boot image, which is what os.image_start points to).
There are other fields, not related to addresses but more informational, so that the image can be handled differently if required:
images.os.type = IH_TYPE_KERNEL;
images.os.comp = IH_COMP_NONE;
images.os.os = IH_OS_LINUX;
  3. BOOTM_STATE_FINDOTHER -> bootm_find_others -> bootm_find_images
Find the ramdisk memory address, setting up images.rd_start and images.rd_end.
Find the dtb memory address, setting up images.ft_addr and images.ft_len.
  4. BOOTM_STATE_LOADOS
Copy the image (decompressing if needed) from its RAM address (images.os.image_start) to its load address (images.os.load), and reserve that area from the lmb:
iflag = bootm_disable_interrupts();
ret = bootm_load_os(images, &load_end, 0);
if (ret == 0)
        lmb_reserve(&images->lmb, images->os.load,
                    (load_end - images->os.load));
Will this update images.ep as well? No; ep is fixed once os.load is fixed.
  5. BOOTM_STATE_RAMDISK
Set up the ramdisk relocation addresses, i.e. images.initrd_start and images.initrd_end.
initrd_start is the final ramdisk load address (as os.load is for the kernel). If initrd_start differs from rd_start, ramdisk relocation will happen. The source ramdisk address is determined on the bootm command line (directly or indirectly) and set up in the BOOTM_STATE_FINDOTHER stage mentioned above; the destination ramdisk address can be controlled by several factors, including compile options (CONFIG_SYS_BOOT_RAMDISK_HIGH), environment variables (initrd_high) and some fields in the boot image, such as the ramdisk load address in the Android boot image. And I wholeheartedly agree with you that it is a huge headache for a beginner (like me) to sort this out.
/*
 * boot_ramdisk_high() takes a relocation hint from the "initrd_high"
 * environment variable and, if requested, ramdisk data is moved to a
 * specified location.
 */
int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len,
                      ulong *initrd_start,  /* out */
                      ulong *initrd_end);   /* out */
  6. BOOTM_STATE_OS_PREP
Set up the kernel parameters using either atags or fdt, with the latter taking precedence over the former. When fdt is used, it fixes up the kernel command line and the ramdisk address (if one is used) by amending the dtb:
int fdt_initrd(void *fdt, ulong initrd_start, ulong initrd_end)
{
        err = fdt_setprop_uxx(fdt, nodeoffset, "linux,initrd-start",
                              (uint64_t)initrd_start, is_u64);
}
  7. BOOTM_STATE_OS_GO
Linux go!

by Bin Chen (noreply@blogger.com) at July 07, 2017 10:37

July 05, 2017

Ard Biesheuvel

GHASH for low-end ARM cores

The Galois hash algorithm (GHASH) is a fairly straight-forward keyed hash algorithm based on finite field multiplication, using the field GF(2^128) with characteristic polynomial x^128 + x^7 + x^2 + x + 1. (An excellent treatment of Galois fields can be found here)

The significance of GHASH is that it is used as the authentication component in the GCM algorithm, which is an implementation of authenticated encryption with associated data (AEAD), a cryptographic mode that combines authentication of data sent in the clear with authentication of data that is sent in encrypted form at the same time. It is widely used these days, primarily in the networking domain (IPsec, IEEE 802.11)
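If AEAD is new to you, the shape of the interface is easy to see from a high-level library. Here is a minimal sketch using the Python cryptography package (an assumption on my part that it is available; it is unrelated to the assembly discussed below):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                    # 96-bit nonce; never reuse with a key
aad = b"header sent in the clear"         # authenticated but not encrypted
ct = aesgcm.encrypt(nonce, b"secret payload", aad)
assert aesgcm.decrypt(nonce, ct, aad) == b"secret payload"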

ISA support

Both the Intel and ARMv8 instruction sets now contain support for carry-less multiplication (also known as polynomial multiplication), primarily to allow for accelerated implementations of GHASH to be created, which formerly had to rely on unwieldy and less secure table based implementations. (The Linux implementation pre-computes a 4 KB lookup table for each instance of the hash algorithm that is in use, i.e., for each session having a different key. 4 KB per IPsec connection does not sound too bad in terms of memory usage, but the D-cache footprint may become a bottleneck when serving lots of concurrent connections.) In contrast, implementations based on these special instructions are time invariant, and are significantly faster (around 16x on high end ARMv8 cores).

Unfortunately, though, while ARMv8 specifies a range of polynomial multiplication instructions with various operand sizes, the one we are most interested in, which performs carry-less multiplication on two 64-bit operands to produce a 128-bit result, is optional in the architecture. So on low-end cores such as the Cortex-A53 (as can be found in the Raspberry Pi 3), the accelerated driver is not available because this particular instruction is not implemented.

Using vmull.p8 to implement vmull.p64

The other day, I stumbled upon the paper Fast Software Polynomial Multiplication on ARM Processors Using the NEON Engine by Danilo Camara, Conrado Gouvea, Julio Lopez and Ricardo Dahab, which describes how 64×64 to 128 bit polynomial multiplication (vmull.p64) can be composed using 8×8 to 16 bit polynomial multiplication (vmull.p8) combined with other SIMD arithmetic instructions. The nice thing about vmull.p8 is that it is a standard NEON instruction, which means all NEON capable CPUs implement it, including the Cortex-A53 on the Raspberry Pi 3.

Transliterating 32-bit ARM code to the 64-bit ISA

The algorithm as described in the paper is based on the 32-bit instruction set (retroactively named AArch32), which deviates significantly from the new 64-bit ISA called AArch64. The primary difference is that the number of SIMD registers has increased to 32, which is nice, but which has a downside as well: it is no longer possible to directly use the top half of a 128-bit register as a 64-bit register, which is something the polynomial multiplication algorithm relies on.

The original code looks something like this (note the use of ‘high’ and ‘low’ registers in the same instruction)

.macro          vmull_p64, rq, ad, bd
vext.8          t0l, \ad, \ad, #1       @ A1
vmull.p8        t0q, t0l, \bd           @ F = A1*B
vext.8          \rq\()_L, \bd, \bd, #1  @ B1
vmull.p8        \rq, \ad, \rq\()_L      @ E = A*B1
vext.8          t1l, \ad, \ad, #2       @ A2
vmull.p8        t1q, t1l, \bd           @ H = A2*B
vext.8          t3l, \bd, \bd, #2       @ B2
vmull.p8        t3q, \ad, t3l           @ G = A*B2
vext.8          t2l, \ad, \ad, #3       @ A3
vmull.p8        t2q, t2l, \bd           @ J = A3*B
veor            t0q, t0q, \rq           @ L = E + F
vext.8          \rq\()_L, \bd, \bd, #3  @ B3
vmull.p8        \rq, \ad, \rq\()_L      @ I = A*B3
veor            t1q, t1q, t3q           @ M = G + H
vext.8          t3l, \bd, \bd, #4       @ B4
vmull.p8        t3q, \ad, t3l           @ K = A*B4
veor            t0l, t0l, t0h           @ t0 = (L) (P0 + P1) << 8
vand            t0h, t0h, k48
veor            t1l, t1l, t1h           @ t1 = (M) (P2 + P3) << 16
vand            t1h, t1h, k32
veor            t2q, t2q, \rq           @ N = I + J
veor            t0l, t0l, t0h
veor            t1l, t1l, t1h
veor            t2l, t2l, t2h           @ t2 = (N) (P4 + P5) << 24
vand            t2h, t2h, k16
veor            t3l, t3l, t3h           @ t3 = (K) (P6 + P7) << 32
vmov.i64        t3h, #0
vext.8          t0q, t0q, t0q, #15
veor            t2l, t2l, t2h
vext.8          t1q, t1q, t1q, #14
vmull.p8        \rq, \ad, \bd           @ D = A*B
vext.8          t2q, t2q, t2q, #13
vext.8          t3q, t3q, t3q, #12
veor            t0q, t0q, t1q
veor            t2q, t2q, t3q
veor            \rq, \rq, t0q
veor            \rq, \rq, t2q
.endm

However, things like veor t1l, t1l, t1h or using ext with upper halves of registers are not possible in AArch64, and so we need to transpose the contents of some of the registers using the tbl and/or zip/unzip instructions. Also, the vmull.p8 instruction now exists in two variants: pmull operating on the lower halves and pmull2 operating on the upper halves of the input operands.

We end up with the following sequence, which is 3 instructions longer than the original:

.macro          __pmull_p8, rq, ad, bd, i
.ifb            \i
ext             t4.8b, \ad\().8b, \ad\().8b, #1         // A1
ext             t8.8b, \bd\().8b, \bd\().8b, #1         // B1
ext             t5.8b, \ad\().8b, \ad\().8b, #2         // A2
ext             t7.8b, \bd\().8b, \bd\().8b, #2         // B2
ext             t6.8b, \ad\().8b, \ad\().8b, #3         // A3
ext             t9.8b, \bd\().8b, \bd\().8b, #3         // B3
ext             t3.8b, \bd\().8b, \bd\().8b, #4         // B4

pmull           t4.8h, t4.8b, \bd\().8b                 // F = A1*B
pmull           t8.8h, \ad\().8b, t8.8b                 // E = A*B1
pmull           t5.8h, t5.8b, \bd\().8b                 // H = A2*B
pmull           t7.8h, \ad\().8b, t7.8b                 // G = A*B2
pmull           t6.8h, t6.8b, \bd\().8b                 // J = A3*B
pmull           t9.8h, \ad\().8b, t9.8b                 // I = A*B3
pmull           t3.8h, \ad\().8b, t3.8b                 // K = A*B4
pmull           \rq\().8h, \ad\().8b, \bd\().8b         // D = A*B
.else
tbl             t4.16b, {\ad\().16b}, perm1.16b         // A1
tbl             t8.16b, {\bd\().16b}, perm1.16b         // B1
tbl             t5.16b, {\ad\().16b}, perm2.16b         // A2
tbl             t7.16b, {\bd\().16b}, perm2.16b         // B2
tbl             t6.16b, {\ad\().16b}, perm3.16b         // A3
tbl             t9.16b, {\bd\().16b}, perm3.16b         // B3
tbl             t3.16b, {\bd\().16b}, perm4.16b         // B4

pmull2          t4.8h, t4.16b, \bd\().16b               // F = A1*B
pmull2          t8.8h, \ad\().16b, t8.16b               // E = A*B1
pmull2          t5.8h, t5.16b, \bd\().16b               // H = A2*B
pmull2          t7.8h, \ad\().16b, t7.16b               // G = A*B2
pmull2          t6.8h, t6.16b, \bd\().16b               // J = A3*B
pmull2          t9.8h, \ad\().16b, t9.16b               // I = A*B3
pmull2          t3.8h, \ad\().16b, t3.16b               // K = A*B4
pmull2          \rq\().8h, \ad\().16b, \bd\().16b       // D = A*B
.endif

eor             t4.16b, t4.16b, t8.16b                  // L = E + F
eor             t5.16b, t5.16b, t7.16b                  // M = G + H
eor             t6.16b, t6.16b, t9.16b                  // N = I + J

uzp1            t8.2d, t4.2d, t5.2d
uzp2            t4.2d, t4.2d, t5.2d
uzp1            t7.2d, t6.2d, t3.2d
uzp2            t6.2d, t6.2d, t3.2d

// t4 = (L) (P0 + P1) << 8
// t5 = (M) (P2 + P3) << 16
eor             t8.16b, t8.16b, t4.16b
and             t4.16b, t4.16b, k32_48.16b

// t6 = (N) (P4 + P5) << 24
// t7 = (K) (P6 + P7) << 32
eor             t7.16b, t7.16b, t6.16b
and             t6.16b, t6.16b, k00_16.16b

eor             t8.16b, t8.16b, t4.16b
eor             t7.16b, t7.16b, t6.16b

zip2            t5.2d, t8.2d, t4.2d
zip1            t4.2d, t8.2d, t4.2d
zip2            t3.2d, t7.2d, t6.2d
zip1            t6.2d, t7.2d, t6.2d

ext             t4.16b, t4.16b, t4.16b, #15
ext             t5.16b, t5.16b, t5.16b, #14
ext             t6.16b, t6.16b, t6.16b, #13
ext             t3.16b, t3.16b, t3.16b, #12

eor             t4.16b, t4.16b, t5.16b
eor             t6.16b, t6.16b, t3.16b
eor             \rq\().16b, \rq\().16b, t4.16b
eor             \rq\().16b, \rq\().16b, t6.16b
.endm

On the Raspberry Pi 3, this code runs 2.8x faster than the generic, table based C code. This is a nice improvement, but we can do even better.

GHASH reduction

The accelerated GHASH implementation that uses the vmull.p64 instruction looks like this:

ext		T2.16b, XL.16b, XL.16b, #8
ext		IN1.16b, T1.16b, T1.16b, #8
eor		T1.16b, T1.16b, T2.16b
eor		XL.16b, XL.16b, IN1.16b

pmull2		XH.1q, XL.2d, SHASH.2d		// a1 * b1
eor		T1.16b, T1.16b, XL.16b
pmull	 	XL.1q, XL.1d, SHASH.1d		// a0 * b0
pmull		XM.1q, T1.1d, SHASH2.1d		// (a1 + a0)(b1 + b0)

eor		T2.16b, XL.16b, XH.16b
ext		T1.16b, XL.16b, XH.16b, #8
eor		XM.16b, XM.16b, T2.16b

pmull		T2.1q, XL.1d, MASK.1d
eor		XM.16b, XM.16b, T1.16b

mov		XH.d[0], XM.d[1]
mov		XM.d[1], XL.d[0]

eor		XL.16b, XM.16b, T2.16b
ext		T2.16b, XL.16b, XL.16b, #8
pmull		XL.1q, XL.1d, MASK.1d

eor		T2.16b, T2.16b, XH.16b
eor		XL.16b, XL.16b, T2.16b

What should be noted here is that the finite field multiplication consists of a multiplication step and a reduction step, where the latter essentially performs the modulo division involving the characteristic polynomial (which is how we normalize the result, i.e., ensure that it remains inside the field)
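To make those two steps concrete (and the Karatsuba-style split that the a1*b1, a0*b0 and (a1 + a0)(b1 + b0) comments above refer to), here is a plain Python sketch of carry-less multiplication followed by reduction. It ignores GCM's bit-reflected data layout, so it is for intuition only, not a drop-in model of the assembly:

MASK64 = (1 << 64) - 1
POLY = (1 << 128) | (1 << 7) | (1 << 2) | (1 << 1) | 1   # x^128 + x^7 + x^2 + x + 1

def clmul64(a, b):
    """Carry-less (polynomial) multiply of two 64-bit values, schoolbook style."""
    r = 0
    for i in range(64):
        if (b >> i) & 1:
            r ^= a << i
    return r

def gf128_mul(a, b):
    """Multiply two 128-bit field elements: three 64-bit multiplies, then reduce."""
    a1, a0 = a >> 64, a & MASK64
    b1, b0 = b >> 64, b & MASK64
    hi = clmul64(a1, b1)                        # a1 * b1
    lo = clmul64(a0, b0)                        # a0 * b0
    mid = clmul64(a1 ^ a0, b1 ^ b0) ^ hi ^ lo   # (a1 + a0)(b1 + b0) + hi + lo
    prod = (hi << 128) ^ (mid << 64) ^ lo
    for i in range(255, 127, -1):               # reduce modulo POLY
        if (prod >> i) & 1:
            prod ^= POLY << (i - 128)
    return prod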

So while this sequence is optimal for cores that implement vmull.p64 natively, we can switch to a reduction step that does not involve polynomial multiplication at all, and avoid two copies of the fallback vmull.p64 sequence consisting of 40 instructions each.

ext		T2.16b, XL.16b, XL.16b, #8
ext		IN1.16b, T1.16b, T1.16b, #8
eor		T1.16b, T1.16b, T2.16b
eor		XL.16b, XL.16b, IN1.16b

__pmull_p8	XH, XL, SHASH, 2		// a1 * b1
eor		T1.16b, T1.16b, XL.16b
__pmull_p8 	XL, XL, SHASH			// a0 * b0
__pmull_p8	XM, T1, SHASH2			// (a1 + a0)(b1 + b0)

eor		T2.16b, XL.16b, XH.16b
ext		T1.16b, XL.16b, XH.16b, #8
eor		XM.16b, XM.16b, T2.16b

eor		XM.16b, XM.16b, T1.16b

mov		XL.d[1], XM.d[0]
mov		XH.d[0], XM.d[1]

shl		T1.2d, XL.2d, #57
shl		T2.2d, XL.2d, #62
eor		T2.16b, T2.16b, T1.16b
shl		T1.2d, XL.2d, #63
eor		T2.16b, T2.16b, T1.16b
ext		T1.16b, XL.16b, XH.16b, #8
eor		T2.16b, T2.16b, T1.16b

mov		XL.d[1], T2.d[0]
mov		XH.d[0], T2.d[1]

ushr		T2.2d, XL.2d, #1
eor		XH.16b, XH.16b, XL.16b
eor		XL.16b, XL.16b, T2.16b
ushr		T2.2d, T2.2d, #6
ushr		XL.2d, XL.2d, #1

eor		T2.16b, T2.16b, XH.16b
eor		XL.16b, XL.16b, T2.16b

Loop invariants

Another observation one can make when looking at this code is that the vmull.p64 calls that remain all involve right hand sides that are invariants during the execution of the loop. For the version that uses the native vmull.p64, this does not matter much, but for our fallback sequence, it means that some instructions essentially calculate the same value each time, and the computation can be taken out of the loop instead.

Since we have plenty of spare registers on AArch64, we can dedicate 8 of them to prerotated B1/B2/B3/B4 values of SHASH and SHASH2. With that optimization folded in as well, this implementation runs at 4x the speed of the generic GHASH driver. When combined with the bit-sliced AES driver, GCM performance on the Cortex-A53 increases twofold, from 58 to 29 cycles per byte.

The patches implementing this for AArch64 and for AArch32 can be found here.

by ardbiesheuvel at July 07, 2017 12:12

June 30, 2017

Gema Gomez

Stitching group

A couple of months ago we started a Stitch ‘n B*tch group at work. We meet every week on Thursdays at lunchtime in a meeting room for those of us in the office and via online conference for the rest.

We work in technology and most of our workforce is remote, so we decided to make the group inclusive and invite not only people that may be interested in the office, but also colleagues working from home. So far there are four of us regularly attending this once-a-week meetup at lunchtime and we are having lots of fun sharing stories from our respective areas of the company. We are all from different departments and if it weren’t for this hobby we all share, we may never have gotten to know each other that much.

I cannot encourage crafters out there enough to get organised and do something other than sitting in front of the computer during the lunch hour. Stitching or walking are great activities that help you socialize with your colleagues, plus they are fun. And it makes us so much more productive afterwards!

We are currently planning to attend fibre-east.co.uk at the end of the month together :-)

by Gema Gomez at June 06, 2017 23:00

June 23, 2017

Riku Voipio

Cross-compiling with debian stretch

Debian stretch comes with cross-compiler packages for selected architectures:
 $ apt-cache search cross-build-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...

Let's have a quick, exact-steps guide. But first - while you could do all this in your desktop PC rootfs, it is wiser to contain yourself. Fortunately, Debian comes with a container tool out of the box:

sudo debootstrap stretch /var/lib/container/stretch http://deb.debian.org/debian
echo "strech_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch
Then we set up a cross-building environment for arm64 inside the container:

# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add src line to make "apt-get source" work
echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
apt-get update
# Install cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64
Now that we have a nice build environment, let's choose something more complicated than the usual kernel/BusyBox to cross-build: qemu.

# Get qemu sources from debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build Qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b
Now, that works perfectly for Qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests, or some of the build-dependencies may not be multiarch-enabled. So work continues :)

by Riku Voipio (noreply@blogger.com) at June 06, 2017 13:36

June 22, 2017

Steve McIntyre

-1, Trolling

Here's a nice comment I received by email this morning. I guess somebody was upset by my last post?

From: Tec Services <tecservices911@gmail.com>
Date: Wed, 21 Jun 2017 22:30:26 -0700
To: steve@einval.com
Subject: its time for you to retire from debian...unbelievable..your
         the quality guy and fucked up the installer!

i cant ever remember in the hostory of computing someone releasing an installer
that does not work!!

wtf!!!

you need to be retired...due to being retarded..

and that this was dedicated to ian...what a
disaster..you should be ashames..he is probably roling in his grave from shame
right now....

It's nice to be appreciated.

June 06, 2017 21:59

June 20, 2017

Steve McIntyre

So, Stretch happened...

Things mostly went very well, and we've released Debian 9 this weekend past. Many many people worked together to make this possible, and I'd like to extend my own thanks to all of them.

As a project, we decided to dedicate Stretch to our late founder Ian Murdock. He did much of the early work to get Debian going, and inspired many more to help him. I had the good fortune to meet up with Ian years ago at a meetup attached to a Usenix conference, and I remember clearly he was a genuinely nice guy with good ideas. We'll miss him.

For my part in the release process, again I was responsible for producing our official installation and live images. Release day itself went OK, but as is typical the process ran late into Saturday night / early Sunday morning. We made and tested lots of different images, although numbers were down from previous releases as we've stopped making the full CD sets now.

Sunday was the day for the release party in Cambridge. As is traditional, a group of us met up at a local hostelry for some revelry! We hid inside the pub to escape from the ridiculously hot weather we're having at the moment.

Party

Due to a combination of the lack of sleep and the heat, I nearly forgot to even take any photos - apologies to the extra folks who'd been around earlier whom I missed with the camera... :-(

June 06, 2017 22:21

June 18, 2017

Gema Gomez

mr-provisioner

Over the past two months I have been working on a tool to help my team provision ARM64 hardware with specific images so they can be tested. We currently have labs with OpenStack installed over a few servers, and labs for testing that need to be reprovisioned regularly depending on what is being tested. We didn't have a standard way to install across data centers, get repeatable and reliable test results, or share and test systems across vendors and architectures. So this was the first step on the quest of setting up a reliable infrastructure lab that will help, amongst other things, with 3rd party CI for OpenStack.

The initial requirements were simple:

  • being able to assign servers to users for testing without having to share admin rights on the infrastructure (controlled test environment)
  • install servers with kernel/initrd of the engineer’s choosing
  • ability to preseed/kickstart the installs by the engineer and debug end to end
  • remote console access to the servers
  • asset management of the lab integrated, rather than an external spreadsheet
  • useful for manual (UI) and automated (API) testing
  • generic tool, not vendor specific
  • easy to use

Previously in the data center we were using tftp servers with images that some admin would manually upload at the engineer's request, with not very consistent versioning, and manually updated grub configs with options that were not consistent either.

Now we have a tool, under heavy development, that allows us to do all of the above and we have open sourced it for anyone to be able to use or contribute.

More information

Design documentation: readthedocs

Source code: https://github.com/linaro/mr-provisioner

by Gema Gomez at June 06, 2017 23:00

May 26, 2017

Siddhesh Poyarekar

The story of tunables

This is long overdue and I have finally got around to writing this. Apologies to everyone who asked me to write about it and I responded with "Oh yeah, right away!" If you are not interested in the story bits, start with So what are tunables anyway below.

The story of tunables began in 2013 when I was a relatively fresh glibc engineer in the Red Hat toolchain team. We wanted to add an environment variable to allow users to set the default stack sizes for thread stacks and Carlos took that idea to the next level with the question: How do we make this more extensible so that we have full control over the kind of tuning parameters we accept in glibc but at the same time, allow distributions to add their own tuning parameters without affecting upstream code? He asked this question in the 2013 Cauldron in Mountain View, where the famous glibc BoF happened in a tiny meeting room which overflowed into an adjacent room, which also filled up quickly, and then the BoF overran its 45 minute slot by roughly a couple of hours! Carlos joined the BoF over Hangout (I think it was called Google Talk then) because he couldn’t make it and we had a lengthy back and forth about the pros and cons of having such tuning parameters. In principle, everybody agreed that such a thing would be desirable from a maintenance perspective. However the approach for doing it was something nobody seemed to agree on.

Thus the idea of tunables was born 4 years ago, except that Carlos wrote the first wiki page and called it ‘tunnables’. He consistently spelled it tunnables and I tunables. I won in the end because I wrote the patches ;)

Jokes aside, we were happy about the reception of the idea and we went about documenting it at length. However given that we were a two man army manning the glibc bunkers in Red Hat and the fact that upstream was still reviving itself from the post-Uli era meant that we would never come back to it for a while.

Then 2015 happened and it came with a memorable Cauldron in Prague. It was memorable because by then I had come up with a first draft of an API for the tunables framework. It was also memorable because it was my last month at Red Hat, something I never imagined would ever happen. I was leaving my dream team and I wasn’t sure if I would ever be as happy again. Those uncertainties were unfounded as I know now, but that’s a story for another post.

The struggle to write code

The first draft I presented at Cauldron in 2015 was really just a naive attempt at storing and initializing public values accessed across libraries in glibc and we had not even thought through everything we would end up fixing with tunables. It kinda worked, but it was never going to make the cut. A new employer meant that tunables would become a weekend project and as a result it missed the release deadline. And another, and then another. Towards the closing of every release I would whip out a patchset that would be poked holes into and then the change would be considered too risky to include.

Finally we set a deadline of 2.25 for tunables because by then quite a few devs had started maintaining their own list of tunables on top of my tree, frustratingly rebasing every time I completely changed my approach. We made it in the end, with Florian and I working through the year end holidays to get the whole patchset in before freeze.

So as of 2.25, tunables is firmly entrenched into glibc and as we speak, there are more tunables to come, especially to override IFUNC selections and to tune the processor capability mask.

So what are tunables anyway?

This is where you start if you want the technical description and are not interested in the story bits.

Tunables is an internal implementation detail in glibc. It is a way to manage ways in which we allow behaviour in glibc to be modified. As of now the only way to manage glibc is via environment variables and the way to do that was strewn all over the place in the source code. Tunables provide one place to add the tunable parameter with all of the characteristics it would have and then the framework will handle everything from there. The user of that tunable (e.g. malloc for MALLOC_MMAP_THRESHOLD_ or malloc.mmap.threshold in tunables parlance) would then simply access the tunable from the list and do what it wants to do, without bothering about where it came from.

The framework is implemented in elf/dl-tunables.c and all of the supporting code is named as elf/dl-tunable*. As is evident, tunables is linked into the dynamic linker, where it is initialized very early. In static binaries, the initialization is done in libc-start.c, again early enough to influence almost everything in the program. The list is initialized just once and is modifiable only in the dynamic linker before it relocates itself.

The main list of tunables is maintained in elf/dl-tunables.list. Architectures may define their own tunables in sysdeps/…/dl-tunables.list. There is a README.tunables that lists out the gory details of using tunables within glibc to access its values and if necessary, update it.

This gives us a number of advantages, some of them being the following:

Single Initialization

All environment variables used by glibc would be read in by a single double-nested loop which initializes all tunables. Accesses are then just a GOT away, so no more getenv loops in glibc code. This is not achieved yet since all of the environment variables are not yet ported to tunables (Hint: here’s a nice project for you, you aspiring glibc developer!)

All tunables are listed in a single file

The file elf/dl-tunables.list has a full list of tunables along with its properties such as type, value range, default value and its behaviour with setuid binaries. This caused us to introspect on each environment variable we ported into tunables and we ended up fixing a few bugs as well.

Very Early Initialization

Yes, very early, earlier than you would imagine, earlier than IFUNCs! *gasp*

Tunables get initialized very early so that they can influence almost every behaviour in glibc. The unreleased 2.26 makes this even earlier (or rather, delays CPU features initialization enough) so that tunables can impact selection of routines using IFUNCs. This fixes an important inconsistency in glibc, where LD_HWCAP_MASK was read in dynamically linked binaries but not in static binaries because it was not read in early enough.

relro

The tunable list is read-only, so glibc reads from a list that cannot be tampered by malicious code that gets loaded after relocation.

What changes for me as a user?

The change in 2.25 is minimal enough that you won’t notice. In this release, only the malloc tuning environment variables have been ported to tunables and if you’ve been using those environment variables before, they will continue to work even now. In addition, you get to tune these parameters in a fancy way that doesn’t require the stupid trailing underscore, using the GLIBC_TUNABLES environment variable. The manual describes it extensively so I won’t go into details.
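As a quick sketch of the old and new spellings side by side (the tunable name is the one listed in the glibc manual; "./myapp" and the threshold value are placeholders):

import os
import subprocess

# MALLOC_MMAP_THRESHOLD_ is the old-style environment variable; the tunable
# below is the equivalent, without the trailing underscore.
env = dict(os.environ, GLIBC_TUNABLES="glibc.malloc.mmap_threshold=131072")
subprocess.run(["./myapp"], env=env, check=True)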

The major change is about to happen now. Intel is starting to push a number of tunables to allow you to tune your library to your liking, changing things like string routines that get selected for your program, cache parameters, etc. I believe PowerPC and S390 will see something similar too in the lock elision space, and aarch64 multiarch will be tunable as well. All of this will hopefully come in 2.26 or latest by 2.27.

One thing to note though is that for now tunables are not covered by any ABI or API guarantees. That is to say, if you like a tunable that is in 2.26, we may well remove the tunable in 2.27 if we find that it either does not make sense to have that tunable exposed or exposing that tunable is somehow detrimental to user programs.

The big difference will likely come in when distributions start adding their own tunables into the mix, since it will allow them to add customizations to the library without having to maintain huge ugly patchsets.

The Road Ahead

The big advantage of collecting all tuning parameters under a single framework is the ability to then add new ways to influence those tuning parameters. We have environment variables now, but we could add other methods to tune the library. Some ideas discussed are as follows:

  • Have a systemwide configuration file (e.g. /etc/sysctl.user.conf) that sets different defaults for some tunables and limits the degree to which specific tunables are altered. This allows systems administrators to have more fine grained control over the processes on their system
  • Have user-specific configuration files (e.g. $HOME/.sysctl.user.conf) that does something similar but at a user level
  • Have some tunables modified during execution via some shared memory mechanism

All of this is still evolving, so if you have an idea or would like to work on any of these ideas, feel free to get in touch with me and we can find a way to get you contributing to one of the most critical parts of the operating system!

by Siddhesh at May 05, 2017 15:33

May 20, 2017

Neil Williams

Software, service, data and freedom

Free software, free services but what about your data?

I care a lot about free software, not only as a Debian Developer. The use of software as a service matters as well because my principal free software development is on just such a project, licensed under the GNU Affero General Public License version 3. The AGPL helps by allowing anyone who is suitably skilled to install their own copy of the software and run their own service on their own hardware. As a project, we are seeing increasing numbers of groups doing exactly this and these groups are actively contributing back to the project.

So what is the problem? We've got an active project, an active community and everything is under a free software licence and regularly uploaded to Debian main. We have open code review with anonymous access to our own source code CI and anonymous access to project planning, open mailing list archives as well as an open bug tracker and a very active IRC channel (#linaro-lava on OFTC). We develop in the open, we respond in the open and we publish frequently (monthly, approximately). The code we write defaults to public visibility at runtime with restrictions available for certain use cases.

What else can we be doing? Well it was a simple question which started me thinking.

The lava documentation has various example test scripts e.g. https://validation.linaro.org/static/docs/v2/examples/test-jobs/qemu-kernel-standard-sid.yaml

these have no licence information, we've adapted them for a Linux Foundation project, what licence should apply to these files?

Robert Marshall

Those are our own examples, contributed as part of the documentation and covered by the AGPL like the rest of the documentation and the software which it documents, so I replied with the same. However, what about all the other submissions received by the service?

Data Freedom

LAVA acts by providing a service to authenticated users. The software runs your test code on hardware which might not be available to the user or which is simply inconvenient for the test writer to setup themselves. The AGPL covers this nicely.

What about the data contributed by the users? We make this available to other users who will, naturally, copy and paste for their own tests. In most cases, because the software defaults to public access, anonymous users also get to learn from the contributions of other test writers. This is a good thing and to be encouraged. (One reason why we moved to YAML for all submissions was to allow comments to help other users understand why the submission does specific things.)

Writing a test job submission or a test shell definition from scratch is a non-trivial amount of work. We've written dozens of pages of documentation covering how and how not to do it but the detail of how a test job runs exactly what the test writer requires can involve substantial effort. (Our documentation recommends using version control for each of these works for exactly these reasons.)

At what point do these works become software? At what point do these need licensing? How could that be declared?

Perils of the Javascript Trap approach

When reading up on the AGPL, I also read about Service as a Software Substitute (SaaSS) and this led to The Javascript Trap.

I don't consider LAVA to be SaaSS although it is Software as a Service (SaaS). (Distinguishing between those is best left to the GNU document as it is an almighty tangle at times.)

I did look at the GNU ideas for licensing Javascript but it seems cumbersome and unnecessary - a protocol designed for the specific purposes of their own service rather than as a solution which could be readily adopted by all such services.

The same problems affect trying to untangle sharing the test job data within LAVA.

Adding Licence text

The traditional way, of course, is simply to add twenty lines or so of comments at the top of every file. This works nicely for source code because the comments are hidden from the final UI (unless an explicit reference is made in the --help output or similar). It is less nice for human readable submissions where the first thing someone has to do is scroll past the comments to get to what they want to see. At that point, it starts to look like a popup or a nagging banner - blocking the requested content on a website to try and get the viewer to subscribe to a newsletter or pay for the rest of the content. Let's not actively annoy visitors who are trying to get things done.

Adding Licence files

This can be done in the remote version control repository - then a single line in the submitted file can point at the licence. This is how I'm seeking to solve the problem of our own repositories. If the reference URL is included in the metadata of the test job submission, it can even be linked into the test job metadata and made available to everyone through the results API.

metadata:
  licence.text: http://mysite/lava/git/COPYING
  licence.name: BSD 3 clause

Metadata in LAVA test job submissions is free-form but if the example was adopted as a convention for LAVA submissions, it would make it easy for someone to query LAVA for the licences of a range of test submissions.

Currently, LAVA does not store metadata from the test shell definitions except the URL of the git repo for the test shell definition but that may be enough in most cases for someone to find the relevant COPYING or LICENCE file.

Which licence?

This could be a problem too. If users contribute data under unfriendly licences, what is LAVA to do? I've used the BSD 3 clause in the above example as I expect it to be the most commonly used licence for these contributions. A copyleft licence could be used, although doing so would require additional metadata in the submission to declare how to contribute back to the original author (because that is usually not a member of the LAVA project).

Why not Creative Commons?

Although I'm referring to these contributions as data, these are not pieces of prose or images or audio. These are instructions (with comments) for a specific piece of software to execute on behalf of the user. As such, these objects must comply with the schema and syntax of the receiving service, so a code-based licence would seem correct.

Results

Finally, a word about what comes back from your data submission - the results. This data cannot be restricted by any licence affecting either the submission or the software; it can be restricted using the API or left at the default of public access.

If the results and the submission data really are private, then the solution is to take advantage of the AGPL, take the source code of LAVA and run it internally where the entire service can be placed within a firewall.

What happens next?

  1. Please consider editing your own LAVA test job submissions to add licence metadata.
  2. Please use comments in your own LAVA test job submissions, especially if you are using some form of template engine to generate the submission. This data will be used by others; it is easier for everyone if those users do not have to ask us or you about why your test job does what it does.
  3. Add a file to your own repositories containing LAVA test shell definitions to declare how these files can be shared freely.
  4. Think about other services to which you submit data which is either only partially machine generated or which is entirely human created. Is that data free-form or are you essentially asking the service to do a precise task on your behalf as if you were programming that server directly? (Jenkins is a classic example, closely related to LAVA.)
    • Think about how much developer time was required to create that submission and how the service publishes that submission in ways that allow others to copy and paste it into their own submissions.
    • Some of those submissions can easily end up in documentation or other published sources which will need to know about how to licence and distribute that data in a new format (i.e. modification.) Do you intend for that useful purpose to be defeated by releasing your data under All Rights Reserved?

Contact

I don't enable comments on this blog, but there are enough ways to contact me and the LAVA project in the body of this post that it really shouldn't be a problem for anyone to comment.

by Neil Williams at May 05, 2017 07:24

May 12, 2017

Steve McIntyre

Fonts and presentations

When you're giving a presentation, the choice of font can matter a lot. Not just in terms of how pretty your slides look, but also in terms of whether the data you're presenting is actually properly legible. Unfortunately, far too many fonts are appallingly bad if you're trying to tell certain characters apart. Imagine if you're at the back of a room, trying to read information on a slide that's (typically) too small and (if you're unlucky) the presenter's speech is also unclear to you (noisy room, bad audio, different language). A good clear font is really important here.

To illustrate the problem, I've picked a few fonts available in Google Slides. I've written the characters "1lIoO0" (that's one, lower case L, upper case I, lower case o, upper case O, zero) in each of those fonts. Some of the sans-serif fonts in particular are comically bad for trying to distinguish between these characters.

font examples

It may not matter in all cases if your audience can read all the characters on your slides and tell them apart, but if you're trying to present scientific or numeric results it's critical. Please consider that before looking for a pretty font.

May 05, 2017 22:08

May 05, 2017

Rémi Duraffort

A common mistake with jinja2

Jinja2 is a powerful templating engine for Python.

Inside LAVA, we use Jinja2 to generate configuration files for every board that we support.

The configuration is generated from a template that inherits from a base template.

For instance, for a beaglebone-black called bbb-01, the template inheritance tree is the following:

  • devices/bbb-01.jinja2
  • -> device-types/beaglebone-black.jinja2
  • --> device-types/base-uboot.jinja2
  • ---> device-types/base.jinja2

The first template (devices/bbb-01.jinja2) is usually a list of variables with their corresponding values for this specific device.

{% extends 'beaglebone-black.jinja2' %}

{% set usb_uuid = 'usb-SanDisk_Ultra_20060775320F43006019-0:0' %}
{% set connection_command = "telnet localhost 6000" %}
{% set hard_reset_command = "/usr/bin/pduclient --daemon localhost --hostname pdu --command reboot --port 08" %}
{% set power_off_command = "/usr/bin/pduclient --daemon localhost --hostname pdu --command off --port 08" %}
{% set power_on_command = "/usr/bin/pduclient --daemon localhost --hostname pdu --command on --port 08" %}

In the base templates we were using:

host: {{ ssh_host|default(localhost) }}
port: {{ ssh_port|default(22) }}

This is in fact wrong. If the variables ssh_host and ssh_port are not defined, the resulting file will be:

host:
port: 22

The argument to the default filter in Jinja2 can be either:

  • a Python object (a string, an int, an array, ...)
  • a template variable name

In this case, localhost is interpreted as an undefined template variable name. Hence the result.

The correct template is:

host: {{ ssh_host|default('localhost') }}
port: {{ ssh_port|default(22) }}

That's a really simple mistake that can remain unnoticed for a long time.
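
To see the difference from Python, here is a minimal sketch (assuming the jinja2 package is installed and that neither ssh_host nor ssh_port is defined):

from jinja2 import Template

wrong = Template("host: {{ ssh_host|default(localhost) }}\n"
                 "port: {{ ssh_port|default(22) }}")
right = Template("host: {{ ssh_host|default('localhost') }}\n"
                 "port: {{ ssh_port|default(22) }}")

# With no variables passed to render(), the unquoted localhost is itself an
# undefined variable, so the first template produces an empty host value.
print(wrong.render())   # host:
                        # port: 22
print(right.render())   # host: localhost
                        # port: 22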

by Rémi Duraffort at May 05, 2017 15:21

April 23, 2017

Mark Brown

Bronica Motor Drive SQ-i

I recently got a Bronica SQ-Ai medium format film camera which came with the Motor Drive SQ-i. Since I couldn’t find any documentation at all about it on the internet and had to figure it out for myself I figured I’d put what I figured out here. Hopefully this will help the next person trying to figure one out, or at least by virtue of being wrong on the internet I’ll be able to get someone who knows what they’re doing to tell me how the thing really works.

Bottom plate

The motor drive attaches to the camera using the tripod socket; a replacement tripod socket is provided on the base of the plate. There's also a metal plate with the bottom of the hand grip attached to it, held on to the base plate with a thumb screw. When this is released it gives access to the screw holding in the battery compartment which (very conveniently) takes 6 AA batteries. This also provides power to the camera body when attached.

Bottom plate with battery compartment visible

On the back of the base of the camera there’s a button with a red LED next to it which illuminates slightly when the button is pressed (it’s visible in low light only). I’m not 100% sure what this is for, I’d have guessed a battery check if the light were easier to see.

Top of drive

On the top of the camera there is a hot shoe (with a plastic blanking plate, a nice touch), a mode selector and two buttons. The larger button on the front replicates the shutter release button on the body (which continues to function as well) while the smaller button to the rear of the camera controls the motor – depending on the current state of the camera it cocks the shutter, winds the film and resets the mirror when it is locked up. The mode dial offers three modes: off, S and C. S and C appear to correspond to the S and C modes of the main camera, single and continuous mirror lockup shots.

Overall with this grip fitted and a prism attached the camera operates very similarly to a 35mm SLR in terms of film winding and so on. It is of course heavier (the whole setup weighs in at 2.5kg) but balanced very well and the grip is very comfortable to use.

by broonie at April 04, 2017 13:17

April 11, 2017

Riku Voipio

Deploying OBS

Open Build Service from SuSE is a web service for building deb/rpm packages. It has recently been added to Debian, so finally there is a relatively easy way to set up PPA-style repositories in Debian. Relatively easy as in "there is a learning curve, but nowhere near the complexity of replicating Debian's internal infrastructure". OBS gives you both repositories and build infrastructure, with a clickety web UI and a command line client (osc) to manage them. See Hector's blog for quickstart instructions.

Things learned while setting up OBS

Coming from a Debian background, with OBS coming from the SuSE/RPM world, there are some quirks that can take you by surprise.

Well done packaging

Usually web services are a tough fit for distros: a cascade of weird dependencies and build systems often means the only practical way to build an "open source" web service is to replicate the upstream CI scripts. Not so in the case of OBS. Being made by distro people shows.

OBS does automatic rebuilds of reverse dependencies

Aka automatic binNMUs when you update a library. This, however, means you need lots of build power around. OBS has its own dependency resolver on the server that recalculates which packages need rebuilding and when - workers just get a list of packages to install for the build-depends. This is a major divergence from Debian, where sbuild handles dependencies client side. The OBS dependency handler doesn't handle virtual packages* / alternative build-deps like Debian does - you may have to add a specific "Prefer: foo-dev" to the OBS project config to resolve alternative choices.

OBS server and worker do http requests in both directions

On startup, workers connect to the OBS server, open a TCP port and wait for requests coming from OBS. Having connections in both directions is a bit of a hassle firewall-wise. On the bright side, there is no need to set up uploads via FTP here.

Signing repositories is complicated

With Debian 9.0 making signed repositories pretty much mandatory, OBS makes signing rather complicated. obs-signd isn't included in Debian, since it depends on a gnupg patch that hasn't been upstreamed. Fortunately I found a workaround. OBS signs release files with /usr/bin/sign -d /path/to/release, and replacing the obs-signd-provided sign command with your own script is easy ;)
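
As a rough illustration of such a script, here is a minimal Python sketch. Everything in it is an assumption to check against your own setup: the key ID is a placeholder, and it presumes OBS only ever calls sign -d <file> and reads the armoured detached signature from standard output.

#!/usr/bin/python3
# Hypothetical stand-in for the obs-signd 'sign' command - a sketch only.
# Assumes OBS invokes it as 'sign -d <file>' and expects the armoured
# detached signature on stdout; adjust if your OBS version differs.
import subprocess
import sys

KEY_ID = "repo@example.org"  # placeholder: the GPG key used to sign the repo


def main():
    if len(sys.argv) != 3 or sys.argv[1] != "-d":
        sys.exit("usage: sign -d <file>")
    # Let gpg write the armoured detached signature straight to stdout.
    subprocess.run(
        ["gpg", "--batch", "--yes", "--local-user", KEY_ID,
         "--armor", "--detach-sign", "--output", "-", sys.argv[2]],
        check=True,
    )


if __name__ == "__main__":
    main()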

Git integration is rather bolted-on than integrated

OBS provides a method to integrate with git using services. There is no clickety UI to link to a git repo; instead you create an xml file called _service with osc. There is no way to have the debian/ tree in git.

The upstream community is friendly

Including the happiest thanks from an upstream I've seen recently.

Summary

All in all, I'm rather satisfied with OBS. If you have a home-grown jenkins (or similar) based solution for building deb/rpm packages, you should definitely consider OBS. For simpler uses there is no need to install OBS yourself; the openSUSE public OBS will happily build Debian packages for you.

*How useful are virtual packages anymore? "foo-defaults" packages seem to be the go-to solution for most real use cases anyway.

by Riku Voipio (noreply@blogger.com) at April 04, 2017 20:14

March 23, 2017

Ard Biesheuvel

Project dogfood: my arm64 desktop

As a developer who gets paid to work on improving ARM support in various open source projects, including the Linux kernel, I am used to things like cross compiling, accessing development boards over serial wires and other stuff that is quite common in the embedded world. However, as a LEG engineer, I actually work on systems that are much more powerful, and involve firmware layers and other system software components that are typically associated with a desktop or server PC, and not with a NAS box or a mobile phone. So why am I still using my x86 box to do the actual work?

The reality is that the desktop PC market is not a very appealing market to try and conquer with a new CPU architecture, and conquering the appealing ones is already proving to be hard work. So if the ARM development community wants ARM based workstations, it appears we will have to take matters into our own hands.

Please, do try this at home!

Due to my involvement with the UEFI port of the Celloboard (which is due to ship any day now), I was given an AMD Overdrive B1 development board last year, which is based on the same AMD Seattle SoC (aka Opteron A1100), but has an ATX form factor, a standard ATX power supply connector, two [working] PCIe slots, and onboard SATA (14 ports!) and networking, all of which are fully supported in the upstream Linux kernel.

So what would I need to turn this into a desktop system that is good enough for my day to day work?

The fan

The most annoying thing about switching from embedded/mobile dev boards to ‘server’ dev boards is the bloody fans!! To anyone reading this who is in charge of putting together such systems: a development board is quite likely to spend most of its lifetime within earshot of a developer, rather than in a data center rack. So could we please have quieter fans?!?

</rant>

OK, so the first thing I did was replace the fan with a less noisy one. Do note that the AMD Seattle SoC uses a custom design for the heatsink, so this replacement fan will fit Cello and Overdrive, but not other arm64 based dev boards.

The case

Due to the ATX form factor and ATX power supply connector, there are lots of nice cases to choose from. I chose the smallest one I could find that would still fit a full size ATX board, so I ended up with the Antec Minuet 350, which takes low-profile PCIe cards.

The peripherals

My Overdrive board came with RAM installed, and has networking and SATA built in. So what’s lacking in terms of connectivity for use as a workstation is graphics and USB.

The AMD Seattle SoC has one peculiarity compared to x86 that complicates matters a little here: the RAM is mapped at physical address 0x80_0000_0000 (yes, that’s 9 zeroes), which means there is no 32-bit addressable RAM for PCI DMA. This is something that we could work around using the SMMU (IOMMU in ARM speak), but this is currently not implemented in the UEFI firmware or the Linux kernel, and so we need PCI peripherals that are capable of 64-bit DMA addressing.

For USB, I ended up selecting the SilverStone SST-EC04-P, which ships with a low-profile bracket, and has an onboard connector that can be used to wire up the two USB ports on the front of the case.

For graphics, I looked for a passively cooled, not too recent (for driver support, see below) card with HDMI output, and ended up with the Geforce 210 based MSI N-210, which has a nice, big heatsink (and no fan) and ships with a low profile bracket as well.

Kernel support

The lack of 32-bit addressable RAM for PCI DMA breaks assumptions in quite a few kernel drivers. For the Realtek 8169 Gig-E chip on the CelloBoard, we upstreamed patches that enable 64-bit DMA addressing by default on PCIe versions of the chip.

Much in the same way, I had to fix the nouveau and the ALSA drivers for the Geforce 210. Note that the proprietary, closed source NVidia driver is only available for x86, and so cards that are well supported by the open nouveau driver are strongly preferred.

All these patches have been in mainline since v4.10.

Userland support

‘Userland’ is the word kernel hackers use to refer to everything that executes outside of the kernel. My userland of choice is the Gnome3 desktop, which works quite well on the upcoming Ubuntu version (17.04), but older releases suffer from an annoying SpiderMonkey bug, which is caused by the incorrect assumption on the part of the SpiderMonkey developers that pointers never use more than 47 bits, and that bits 48 and up can be used for whatever you like, as long as you clear them again when trying to dereference the pointer value.

However, the arm64 kernel can be configured to use only 39 bits for virtual addressing, which still leaves plenty of address space and sidesteps the SpiderMonkey bug. This way, older Ubuntu versions are usable as well. I am currently using 16.10.
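
For reference, the knob in question is the virtual address space size choice in the arm64 Kconfig. A minimal .config fragment might look like this (a sketch: the 39-bit option assumes 4 KB pages):

CONFIG_ARM64_4K_PAGES=y
CONFIG_ARM64_VA_BITS_39=y
CONFIG_ARM64_VA_BITS=39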

Firmware support

Now this is where it gets interesting. And I am not just saying that because I work on firmware.

So far, we have enabled everything we need to run an ordinary Ubuntu desktop environment on the Overdrive board. But interacting with the UEFI firmware at boot time still requires a serial cable, and a PC on the other end.

The problem here is driver support. Unlike SATA and USB, which are usually supported by class drivers, network interfaces and graphics cards require UEFI drivers that are specific to the particular chip. For the network interface on my Overdrive, this is a solved problem, given that it is integrated with the SoC, and supported by a driver that AMD have contributed. However, for plug-in peripherals such as my Geforce 210, the driver problem is usually addressed by putting a driver in a so-called option ROM on the card, and exposing it to the firmware in a way that is standardized by the PCI spec.

EFI Byte Code

Back when Intel was conquering the world with Itanium, they foresaw the problem that is affecting the ARM ecosystem today: an x86 driver can only run on a x86 CPU, and an ARM driver can only run on an ARM CPU, but option ROMs do not have unlimited space. Intel solved this by inventing an instruction set called EBC (for EFI byte code), and adding an interpreter for it to the UEFI reference code base. In theory, this allows expansion card vendors to recompile their code using an EBC compiler, and flash it into the option ROMs, so that the cards can be used on any architecture.

In reality, though, EBC is not widely used, is not promoted anymore by Intel (now that Itanium is dead), and even if expansion card vendors could get their hands on the compiler (which is not offered for sale anymore), recompiling source code that is riddled with x86 architecture (or rather, PC platform) based assumptions is not guaranteed to produce a driver that works on other architectures, especially ones with weakly ordered memory that does not start at address 0x0. For graphics cards in particular, the situation is even worse, given that many cards ship with a legacy VBIOS ROM (which requires legacy PC-BIOS compatibility in the x86 UEFI firmware) rather than a UEFI driver built for x86.

And indeed, it turned out that my nice low profile passively cooled Geforce 210 card did not have a UEFI driver in the option ROM, but only a legacy VBIOS driver.

X86 emulation in UEFI

Fortunately, I have another GeForce 210 card that does have a UEFI driver in its option ROM. So I dumped the ROM and extracted the driver, only to find out – not entirely unexpectedly, given the above – that it was an x86 driver, not an EBC driver, and so it is not supported on UEFI for 64-bit ARM.

So when Alexander Graf (of Suse) approached me at Linaro Connect two weeks ago, to discuss the possibilities of running x86 option ROMs on an emulator inside UEFI, I was skeptical at first, but after some more thought and discussion, I decided it was worth a try. Over the past ten days, we have collaborated online, and managed to implement an x86 emulator inside UEFI, based on an old version of QEMU (which is still LGPL licensed) combined with the more recent AArch64 support (whose copyright is owned by HiSilicon).

While this does not solve the problem of crappy drivers that make PC platform based assumptions, it works quite reliably for some network drivers we have tried, and even performs a lot better than EBC (which is a straight interpreter rather than a JIT).

And of course, it allows me to boot my Overdrive B1 in graphical mode.

by ardbiesheuvel at March 03, 2017 18:07

March 19, 2017

Siddhesh Poyarekar

Hello FOSSASIA: Revisiting the event *and* the first program we write in C

I was at FOSSAsia this weekend to deliver a workshop on the very basics of programming. It ended a pretty rough couple of weeks for me, with travel to Budapest (for Linaro Connect) followed immediately by the travel to Singapore. It seems like I don’t travel east in the timezone very well and the effects were visible with me napping at odd hours and generally looking groggy through the weekend at Singapore. It was however all worth it because despite a number of glitches, I had some real positives to take back from the conference.

The conference

FOSSAsia had been on my list of conferences to visit due to Kushal Das telling me time and again that I’d meet interesting people there. I had proposed a talk (since I can’t justify the travel just to attend) a couple of years ago but dropped out since I could not find sponsors for my talk and FOSSAsia was not interested in sponsoring me either. Last year I met Hong at SHD Belgaum and she invited me to speak at FOSSAsia. I gladly accepted since Nisha was going to volunteer anyway. However as things turned out in the end, my talk got accepted and I found sponsorship for travel and stay (courtesy Linaro), but Nisha could not attend.

I came (I’m still in SG, waiting for my flight) half-heartedly since Nisha did not accompany me, but the travel seemed worth it in the end. I met some very interesting people and was able to deliver a workshop that I was satisfied with.

Speaking of the workshop…

I was scheduled to talk on the last day (Sunday) first thing in the morning, and I was pretty sure I was going to be the only person standing, with nobody in their right mind waking up early on a Sunday for a workshop. A Sunday workshop also meant that I knew the venue and its deficiencies - the “Scientist for a Day” part of the Science Center was a disaster since it was completely open and noisy, with lunch being served right next to the room on the first day. I was wary of that, but the Sunday morning slot protected me from it and my workshop went without such glitches.

The workshop content itself was based on an impromptu ‘workshop’ I did at FUDCon Pune 2015, but a little more organized. Here’s a blow by blow account of the talk for those who missed it, and also a reference for those who attended and would like a reference to go back to in future.

Hell Oh World

It all starts with this program. Hello World is what we all say when we are looking to learn a new language. However, after Hello World, we move up to learn the syntax of the language and then try to solve more complex user problems, ignoring the wonderful things that happened underneath Hello World to make it all happen. This session is an attempt to take a brief look into these depths. Since I am a bit of a cynic, my Hello World program is slightly different:

#include <stdio.h>

int
main (void)
{
  printf ("Hell Oh World!\n");
  return 0;
}

We compile this program:

$ gcc -o helloworld helloworld.c

We can see that the program prints the result just fine:

$ ./helloworld 
Hell Oh World!

But then there is so much that went into making that program. Let's take a look at the binary using a process called disassembly, which prints the binary program in a human-readable format - well, at least readable to humans who know assembly language programming.

$ objdump -d helloworld

We wrote only one function: main, so we should see only that. Instead, however, we see so many functions that are present in the binary. In fact, you were lied to when they told you back in college that main() is the entry point of the program! The entry point is the function called _start, which calls a function in the GNU C Library called __libc_start_main, which in turn calls the main function. When you invoke the compiler to build the helloworld program, you’re actually running a number of commands in sequence. In general, you do the following steps:

  • Preprocess the source code to expand macros and includes
  • Compile the source to assembly code
  • Assemble the assembly source to binary object code
  • Link the code against its dependencies to produce the final binary program

Let us look at these steps one by one.

Preprocessing

gcc -E -o helloworld.i helloworld.c

Run this command instead of the first one to produce a pre-processed file. You’ll see that the resultant file has hundreds of lines of code, and among those hundreds of lines is the one line that we need: the prototype for printf, so that the compiler identifies the call to printf:

extern int printf (const char *__restrict __format, ...);

It is possible to just use this extern decl and avoid including the entire header file, but it is not good practice. The overhead of maintaining something like this is unnecessary, especially when the compiler can do the job of eliminating the unused bits anyway. We are better off just including a couple of headers and getting all declarations.

Compiling the preprocessed source

Contrary to popular belief, the compiler does not compile into binary .o - it only generates assembly code. It then calls the assembler in the binutils project to convert the assembly into object code.

$ gcc -S -o helloworld.s helloworld.i

The assembly code is now just this:

    .file   "helloworld.i"
    .section    .rodata
.LC0:
    .string "Hell Oh World!"
    .text
    .globl  main
    .type   main, @function
main:
.LFB0:
    .cfi_startproc
    pushq   %rbp
    .cfi_def_cfa_offset 16
    .cfi_offset 6, -16
    movq    %rsp, %rbp
    .cfi_def_cfa_register 6
    movl    $.LC0, %edi
    call    puts
    movl    $0, %eax
    popq    %rbp
    .cfi_def_cfa 7, 8
    ret
    .cfi_endproc
.LFE0:
    .size   main, .-main
    .ident  "GCC: (GNU) 6.3.1 20161221 (Red Hat 6.3.1-1)"
    .section    .note.GNU-stack,"",@progbits

which is just the main function and nothing else. The interesting thing there, though, is that the printf function call is replaced with puts, because the input to printf is just a string without any format specifiers and puts is much faster than printf in such cases. This is an optimization by gcc to make code run faster. In fact, the compiler runs close to 200 optimization passes to attempt to improve the quality of the generated assembly code. However, it does not add all of those additional functions.

So does the assembler add the rest of the gunk?

Assembling the assembly

gcc -c -o helloworld.o helloworld.s

Here is how we assemble the generated assembly source into an object file. The generated assembly can again be disassembled using objdump and we see this:

helloworld.o:     file format elf64-x86-64


Disassembly of section .text:

0000000000000000 <main>:
   0:   55                      push   %rbp
   1:   48 89 e5                mov    %rsp,%rbp
   4:   bf 00 00 00 00          mov    $0x0,%edi
   9:   e8 00 00 00 00          callq  e <main+0xe>
   e:   b8 00 00 00 00          mov    $0x0,%eax
  13:   5d                      pop    %rbp
  14:   c3                      retq   

which is no more than what we saw with the compiler, just in binary format. So it surely is the linker adding all of the gunk.

Putting it all together

Now that we know that it is the linker adding all of the additional stuff into helloworld, let's look at how gcc invokes the linker. To do this, we need to add a -v to the gcc command. You’ll get a lot of output, but the relevant bit is this:

$ gcc -v -o helloworld helloworld.c
...

...
/usr/libexec/gcc/x86_64-redhat-linux/6.3.1/collect2 -plugin /usr/libexec/gcc/x86_64-redhat-linux/6.3.1/liblto_plugin.so -plugin-opt=/usr/libexec/gcc/x86_64-redhat-linux/6.3.1/lto-wrapper -plugin-opt=-fresolution=/tmp/ccEdWzG5.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --build-id --no-add-needed --eh-frame-hdr --hash-style=gnu -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o helloworld /usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crt1.o /usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crti.o /usr/lib/gcc/x86_64-redhat-linux/6.3.1/crtbegin.o -L/usr/lib/gcc/x86_64-redhat-linux/6.3.1 -L/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64 -L/lib/../lib64 -L/usr/lib/../lib64 -L/usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../.. /tmp/cc3m0We9.o -lgcc --as-needed -lgcc_s --no-as-needed -lc -lgcc --as-needed -lgcc_s --no-as-needed /usr/lib/gcc/x86_64-redhat-linux/6.3.1/crtend.o /usr/lib/gcc/x86_64-redhat-linux/6.3.1/../../../../lib64/crtn.o
COLLECT_GCC_OPTIONS='-v' '-o' 'helloworld' '-mtune=generic' '-march=x86-64'

This is a long command, but the main points of interest are all of the object files (*.o) that get linked in: the linker concatenates them and then resolves unresolved references to functions (only puts in this case) among those object files and the libraries (libc.so via -lc, libgcc.so via -lgcc, etc.). To find out which of the object files has the definition of a specific function, say _start, disassemble each of them. You’ll find that crt1.o has the definition.

Static linking

Another interesting thing to note, this time in the disassembly of the final binary, is that the call is to puts@plt, which is not exactly puts. It is in reality a construct called a trampoline, which helps the code jump to the actual puts function at runtime. We need this because puts is actually present in libc.so.6, which the binary simply claims to need by encoding that dependency into the binary. To see this, examine the binary using the -x flag:

$ objdump -x helloworld

helloworld:     file format elf64-x86-64
helloworld
architecture: i386:x86-64, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000000000400430
...
Dynamic Section:
  NEEDED               libc.so.6
...

This is dynamic linking. When a program is executed, what is actually called first is the dynamic linker (ld.so), which then opens all dependent libraries, maps them into memory, and then calls the _start function in the program. During mapping, it also fills in a table of data called the Global Offset Table with offsets of all of the external references (puts in our case) to help the trampoline jump to the correct location.

If you want to be independent of the dynamic linker, then you can link the program statically:

$ gcc -static -o helloworld helloworld.c

This will however result in bloating of the program and also has a number of other disadvantages, like having to rebuild for every update of its dependent libraries and sub-optimal performance since the kernel can no longer share pages among processes for common code.

BONUS: Writing the smallest program

The basics were done with about 10 minutes to spare, so I showed how one could write the smallest program ever. In principle, the smallest program in C is:

int
main (void)
{
  return 42;
}

As is evident though, this pulls in everything from the C and gcc libraries, so it is clearly hard to keep things small in C; let's try it in assembly instead. We already know that _start is the real entry point of the program, so we need to implement that function. To exit the program, we need to tell the kernel to exit by invoking the exit_group syscall, which has syscall number 231. Here is what the function looks like:

.globl _start
_start:
    mov $0xe7, %rax     # 0xe7 == 231, the exit_group syscall number
    mov $42, %rdi       # exit status: 42, matching the C version above
    syscall             # trap into the kernel; the process exits here

We can build this with gcc to get a very small binary but to do this, we need to specify that we don’t want to use the standard libraries:

gcc -o min -nostdlib min.s

The resultant file is 864 bytes, as opposed to the 8.5K binary from the C program. We can reduce this further by invoking the assembler and linker directly:

$ as -o min.o min.s
$ ld -o min min.o

This results in an even smaller binary, at 664 bytes! This is because gcc puts some extra meta information in the binary to identify its builds.

Conclusion

At this point we ran out of time and we had to cut things short. It was a fun interaction because there were even a couple of people with Macbooks and we spotted a couple of differences in the way the linker ran due to differences in the libc, despite having the same gcc installed. I wasn’t able to focus too much on the specifics of these differences and I hope they weren’t a problem for the attendees using Macs. In all it was a satisfying session because the audience seemed happy to learn about all of this. It looked like many of them had more questions (and wonderment, as I had when I learned these things for the first time) in their mind than they came in with and I hope they follow up and eventually participate in Open Source projects to fulfill their curiosity and learn further.

by Siddhesh at March 03, 2017 17:15

February 28, 2017

Gema Gomez

Atlanta PTG, OpenStack

Last week was all about OpenStack and making sure my team from Linaro/ARM was present at all the relevant sessions. There were only two of us there, and with so many sessions happening at the same time it became obvious that we would only be able to cover so much ground, so we decided to focus on the topics that are important for us this cycle.

Some of the most interesting conversations for me happened outside the sessions themselves. I found out about the 3rd party CI team and decided to join them on their weekly meetings from now on. This is important for Linaro because one of our main priorities is to get 3rd party CI for OpenStack set up on AArch64 hardware.

We were also working on the automatic set up of tempest for RefStack users. Right now the set up of tempest has a very steep learning curve. We want to make this as low as possible so that people can start testing without having a very deep understanding of tempest. See point 3 of action items on this etherpad. Having changed test environments and test clouds my fair share of times, this is a topic very close to my heart. There is a lot of engineering time going into configuring tempest properly on all sort of different environments, so trying to minimise this should save time in the long run to many teams.

Vertical teams

As discussed on the previous post, the first two days were about Kolla and ramping up on the project’s priorities for us. The Kolla meetings were well organised and driven, making us feel like we were using our time wisely by being there.

Some interesting topics from these discussions:

Horizontal teams

During the horizontal teams days, I attended mostly Ironic sessions. This is a comprehensive summary of the discussions (https://etherpad.openstack.org/p/ironic-pike-ptg-ongoing-work).

One of the main issues in Ironic is getting good reviewers/core developers involved in the project. They were having very interesting discussions around how they may or may not have enough time to review things that are not on their current roadmap, but feel they should make the time for this. Becoming a core developer of any OpenStack project does require a lot of time and dedication: reviews of code from others, not only the code that you may be interested in, are required. Also, Ironic has a variety of core developers that may review code from very different angles; all of those reviews are valuable, but there is a feeling that contributors may find the process a bit hit and miss when trying to get a patch landed.

The Ironic team is going to work on a list of recommendations for new contributors to be able to join the efforts in a more seamless way.

There were also discussions about how to deprecate the Ironic client and move to the new OpenStack client over the coming cycles.

The Ironic UI is a good place for new contributors to make a positive impact in the project. There is a list of features on a google doc that are there for new contributors to work on, the person to coordinate with on irc is TheJulia.

I also attended a couple of Nova sessions. My take from those was about quotas and how quotas may be breaking compatibility on Pike; for more information see the etherpad. Another interesting topic was the Nova REST API discussion; see the etherpad.

Summary

Overall I am quite pleased with how the PTG was organised and run. It was up to the different PTLs to decide how to run the sessions and the ones I attended were mostly productive.

One problem I had during the horizontal days is that I could only really focus on one project. Other years I have attended the midcycles for the Interop WG and separately for Infra/QA, and that gave me time to be part of both conversations, whereas now I had to choose just one horizontal team, so my involvement with Interop and Infra/QA was minor this cycle, due to the need to focus on Kolla. Funnily enough, the weekly IRC meetings for Interop and Kolla also coincide, so I have been having to choose between the two for a few weeks now. Having to choose between horizontal teams is not my preferred option; I would have preferred to be able to attend Interop and Infra/QA even if it meant travelling an extra time.

I was however able to attend the Ironic meetings, which I wouldn't have if I hadn't been in Atlanta last week, as our involvement with that project is not big enough to justify going to a midcycle for it.

Another lesson learnt: I didn't need to be there on Friday, since the sessions I was interested in pretty much wound down on Thursday.

I would have liked to have a t-shirt from this event, but we got project mascot stickers instead. My laptop surely liked this, as it doesn’t wear t-shirts well.

It was overall a great week and I got a voucher for ODS in Boston that I intend to use.

by Gema Gomez at February 02, 2017 00:00

February 22, 2017

Gema Gomez

Project Teams Gathering OpenStack - First thoughts

I am in Atlanta this week at the first OpenStack PTG meetings. Since this is the first meeting of this kind, I didn’t really know what to expect. We had a schedule with a lot of project meetings happening at the same time. The first two days have been all about horizontal teams, that is, teams that interact with all the other teams in one way or another.

I have been busy at the OpenStack Kolla meetings this time around. I tried to attend a few discussions of the Interop/Refstack meeting, but I couldn’t really keep up with both, so I decided to focus on Kolla, which is going to be where our contributions will mainly go for OpenStack Pike. Kolla is a project that produces Docker containers and scripts to be able to easily install OpenStack. Up until now we were producing our own packages for Debian and CentOS, but this has become difficult to maintain and it doesn’t scale very well. Contributing AArch64 containers to Kolla and helping the project become truly multiarch seems to be the way forward.

My team has a blueprint that we are working towards. The Kolla team have welcomed us and our contributions and are being very helpful getting us up to speed to be able to contribute effectively.

The first two days have been very productive, here is a link to the etherpad with the conversations we have been having: https://etherpad.openstack.org/p/kolla-pike-ptg-schedule

Now I am ready to start attending the vertical teams' meetings. This should be easier for me as I don’t need to be everywhere at once.

Here is a picture of the Interop Working Group, taken on Tuesday of the PTG: Interop WG

by Gema Gomez at February 02, 2017 00:00

February 16, 2017

Ard Biesheuvel

Time invariant AES

Rule #1 of crypto club: don’t roll your own

Kernel hackers are usually self-righteous bastards who think that they are smarter than everyone else (and I am no exception). Sometimes, it’s hard to fight the urge to reimplement something simply because you think you would have done a slightly better job. On the enterprise scale, this is usually referred to as not invented here syndrome (NIH), and in the past, I have worked for companies where this resulted in entire remote procedure call (RPC) protocol stacks being developed, including a home cooked interface definition language (IDL) and a compiler (yes, I am looking at you, TomTom).

The EDK2 open source project is also full of reinvented wheels, where everything from the build tools to string libraries have been implemented and even invented from scratch. But when it came to incorporating crypto into the code base, they did the right thing, and picked the OpenSSL library, even if this meant putting the burden on the developer to go and find the correct tarball and unpack it in the right place. (Due to license incompatibilities, merging the OpenSSL code into the EDK2 tree would render it undistributable.)

The bottom line, of course, is that you are not smarter than everyone else, and in fact, that there are very smart people out there whose livelihood depends on breaking your supposedly secure system. So instead of reimplementing existing crypto algorithms, or, god forbid, inventing ‘better’ ones, you can spend your time more wisely and learn about existing algorithms and how to use them correctly.

Rule #2 of crypto club: read the manual

Not all encryption modes are suitable for all purposes. For instance, symmetric stream ciphers such as RC4, or AES in CTR mode, should never reuse the same combination of key and initialization vector (IV). This makes stream ciphers mostly unsuitable for disk encryption, which typically derives its IV from the sector number, and sectors are typically written to more than once. (The reason is that, since the key stream is xor’ed with the plaintext to obtain the ciphertext, two ciphertexts encrypted with the same key and IV xor’ed with each other will produce the same value as the two plaintexts xor’ed together, which means updates to disk blocks are essentially visible in the clear. Ouch.)
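
As a toy illustration of that xor property, here is a small Python sketch; a random byte string merely stands in for the keystream that AES in CTR mode would generate from a reused key/IV pair:

import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in for the keystream produced by a reused key/IV combination.
keystream = os.urandom(16)

p1 = b"attack at dawn!!"
p2 = b"attack at dusk!!"
c1 = xor(p1, keystream)  # "encrypt" both messages with the same keystream
c2 = xor(p2, keystream)

# The keystream cancels out, exposing the xor of the two plaintexts.
assert xor(c1, c2) == xor(p1, p2)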

Many other algorithms have similar limitations: DES had its weak keys, RSA needs padding to be safe, and DSA (as well as ElGamal encryption) should not reuse its k parameter, or its key can be trivially factored out.

Algorithm versus implementation

Unfortunately, we are not there yet. Even after having ticked all the boxes, we may still end up with a system that is insecure. One notable example is AES, which is superb in all other aspects, but, as Daniel J. Bernstein claimed in this paper in 2005, its implementation may be vulnerable to attacks.

In a nutshell, Daniel J. Bernstein’s paper shows that there is an exploitable correlation between the key and the response time of a network service that involves AES encryption, but only when the plaintext is known. This is due to the fact that the implementation performs data dependent lookups in precomputed tables, which are typically 4 – 8 KB in size (i.e., much larger than a typical cacheline), resulting in a variance in the response time.

This may sound peculiar, i.e., if the plaintext is known, what is there to attack, right? But the key itself is also confidential, and AES is also used in a number of MAC algorithms where the plaintext is usually not secret to begin with. Also, the underlying structure of the network protocol may allow the plaintext to be predicted with a reasonable degree of certainty.

For this reason, OpenSSL (which was the implementation under attack in the paper), has switched to time invariant AES implementations as much as possible.

Time invariant AES

On 64-bit ARM, we now have three separate time invariant implementations of AES, one based on the ARMv8 Crypto Extensions and two that are NEON based. On 32-bit ARM, however, the only time invariant AES implementation is the bit sliced NEON one, which is very inefficient when operating in sequential modes such as CBC encryption or CCM/CMAC authentication. (There is an ARMv8 Crypto Extensions implementation for 32-bit ARM as well, but that is currently only relevant for 32-bit kernels running on 64-bit hardware.)

So for Linux v4.11, I have implemented a generic, [mostly] time invariant AES cipher, that should eliminate variances in AES processing time that are correlated with the key. It achieves this by choosing a slightly slower algorithm that is equivalent to the table based AES, but uses only 256 bytes of lookup data (the actual AES S-box), and mixes some S-box values at fixed offsets with the first round key. Every time the key is used, these values need to be xor’ed again, which will pull the entire S-box into the D-cache, hiding the lookup latency of subsequent data dependent accesses.

So if you care more about security than about performance when it comes to networking, for instance, for unmonitored IoT devices that listen for incoming network connections all day, my recommendation is to disable the table based AES, and use the fixed time flavour instead.

# CONFIG_CRYPTO_AES_ARM is not set
CONFIG_CRYPTO_AES_TI=y

The priority based selection rules will still select the much faster NEON code when possible (provided that the CPU has a NEON unit), but this is dependent on the choice of chaining mode.

Use case          Algorithm   Resolved as
Disk encryption   xts(aes)    xts-aes-neonbs
mac80211 CMAC     cmac(aes)   cmac(aes-fixed-time)
VPN               ccm(aes)    ccm_base(ctr-aes-neonbs,cbcmac(aes-fixed-time))

by ardbiesheuvel at February 02, 2017 09:33