No Relation To Blog
Subscribe to the personal musings of Emmanuel Bernard
You can start IntelliJ IDEA from the command line which is handy when you live in a terminal like me. But you need to enable that feature.
Open IntelliJ IDEA, go to Tools -> Create Command-Line Launcher... and optionally adjust the location and name of the script that will start IntelliJ IDEA.
Now from your command line, you can type:
- idea . to open the project in the current directory
- idea pom.xml to import the Maven project
- idea diff <left> <right> to launch the diff tool
The generated script has an annoying flaw though: it references your preference and cache directories in a hard-coded fashion. And for some reason the IntelliJ folks embed the version number in these directory names. That's annoying, as it will likely break the minute you move to another (major?) version.
Antonio has a solution for that: a simpler and more forgiving script, in good anti-fragile fashion. The script is not generic and only runs on macOS.
#!/bin/sh
# check for where the latest version of IDEA is installed
IDEA=`ls -1d /Applications/IntelliJ\ * | tail -n1`
wd=`pwd`

# were we given a directory?
if [ -d "$1" ]; then
#  echo "checking for things in the working dir given"
  wd=`ls -1d "$1" | head -n1`
fi

# were we given a file?
if [ -f "$1" ]; then
#  echo "opening '$1'"
  open -a "$IDEA" "$1"
else
  # let's check for stuff in our working directory.
  pushd $wd > /dev/null

  # does our working dir have an .idea directory?
  if [ -d ".idea" ]; then
#    echo "opening via the .idea dir"
    open -a "$IDEA" .

  # is there an IDEA project file?
  elif [ -f *.ipr ]; then
#    echo "opening via the project file"
    open -a "$IDEA" `ls -1d *.ipr | head -n1`

  # Is there a pom.xml?
  elif [ -f pom.xml ]; then
#    echo "importing from pom"
    open -a "$IDEA" "pom.xml"

  # can't do anything smart; just open IDEA
  else
#    echo 'cbf'
    open "$IDEA"
  fi
  popd > /dev/null
fi
The GitHub gist version of this script.
It does not offer the call to IDEA's diff tool. But I'm from an era where we resolved >-based diff conflicts in Notepad, so that does not bother me much.
I think I'll go for Antonio's solution, that will avoid some nasty WTF moments when the preference directory moves and I will have forgotten all of this.
I had never really understood why the baby boomers were such a problem for the pensions of the younger generations. My working hypothesis was that the baby boomers' pensions are paid by the active workforce in a pay-as-you-go system like France's. So the difficult moment is the stretch of working life when the payment pressure is highest. But these baby boomers will eventually die, and the system will then return to normal. That reasoning is correct, but there are several aggravating factors.
First, the baby boom phenomenon lasted longer than I thought. It corresponds to the births from 1945 to 1975 and to a birth surplus of roughly 20-25% per year compared to the post-1975 average (ballpark figures). So it is a big chunk, both in duration and in extra volume per year. The baby boomers started retiring five years ago, and this will go on almost until I retire myself (I was apparently not born at the best time).
Second, this phenomenon is neither offset (nor worsened, for that matter) by births and immigration. To keep it simple, our 1.95 children per woman and the slight immigration we have should suffice to keep the age brackets stable. That gives the age pyramid a rectangular base. That said, if we made a few more babies or welcomed a bit more immigration, my pension would be better off, QED.
And obviously, people die later. Thanks to progress in medicine and nutrition.
So we will end up with a pyramid shaped like a big rectangle with a small hat on top (called a cylinder, for a reason that escapes me).
According to INSEE, the share of elderly people in the population will go from 20% to 30-32% by 2035 and then stabilize. It should decrease slightly once the baby boomers have all left to fill the inverted pyramid in the sky (beautiful, isn't it?). So the hat should shrink a bit in width. But that is for after 2060.
Since I was born just after the baby boom, I basically lose on all counts (I pay and I will not be paid). Unless France starts making a lot of babies (immigration not being very fashionable these days). It's cold outside, go for it!
I gave a three-hour course on inverted indexes to students from Telecom SudParis, an engineering school here in... Paris :) It was fun to refresh my knowledge of all the fundamental structures that make Lucene what it is.
I covered quite some ground in this three-hour course (a bit too much, to be honest). Amongst other things: b-trees, inverted indexes, how analyzers and filters do most of the magic (synonyms, n-grams, phonetic approximation, stemming, etc.), how fuzzy search works in Lucene (state-machine based), scoring, log-structured merge, the actual physical representation of a Lucene index, and a few of the tricks the Lucene developers came up with. My list of reference links is pretty rich too.
Without further ado, here is the presentation.
I tend to be sparse on my slides so make sure to press s to see the speaker notes.
The presentation is released under Creative Commons and sources are on GitHub.
It is a first revision and can definitely benefit from a few improvements but there is only so much time per day :)
It is surprisingly hard to find a good explanation of level-based compaction in a Log-Structured Merge Tree. It turns out that it is best explained in LevelDB's documentation. You can find the (html) details here.
This blog post is a collection of key concepts I did not grasp initially. Sort of a mental note for myself. It should bring you nicely from standard size-based compaction LSM to level-based compaction.
Levelled LSM structures are useful as they greatly limit the number of files to access when reading a given key: at most nbrOfLevel0Files + (n - 1) files, where n is the number of levels.
You still have levels, but level 1 and above behave differently:
- there is the in-memory level plus the append-only log (not ordered)
- there is level 0, which behaves like a normal LSM level (each file is ordered but files have overlapping keys)
- level 1 and above have files containing non-overlapping keys
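To see why this shape helps reads, here is a toy sketch (my own simplified model with made-up data layouts, not LevelDB's code): every level-0 segment may have to be probed, but each level above needs at most one, which is where the nbrOfLevel0Files + (n-1) bound comes from.

```python
# Toy read path for a levelled LSM. A segment is modeled as
# (min_key, max_key, dict-of-key-to-value).

def read(key, level0, levels):
    """level0: segments in insertion order (ranges may overlap);
    levels: levels 1..n, each a list of non-overlapping segments."""
    # Level 0: every segment may hold the key; the newest one wins.
    for lo, hi, data in reversed(level0):
        if lo <= key <= hi and key in data:
            return data[key]
    # Levels 1+: at most one segment per level can cover the key.
    for level in levels:
        for lo, hi, data in level:
            if lo <= key <= hi:
                if key in data:
                    return data[key]
                break  # no other segment in this level can match
    return None
```

In a real implementation the per-level scan would be a binary search over the segment ranges, but the bound is the same: all of level 0, plus one segment per higher level.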
I call a given file at an LSM level (containing ordered keys) a segment; it is usually called an sstable. Why do I call it a segment? I come from Lucene, and I have read Dune and know of sandworms. Plus, segment is a much nicer word than sstable :)
The non-overlapping ranges for a given level L are not set in stone: they are recomputed each time a compaction from level L to level L+1 occurs. Big ahah moment for me.
When a level L (for L >= 1) is merged into level L+1, one segment of L and all overlapping segments of L+1 are read. New segments are created at level L+1 from this data, and the new level L+1 is made of these new segments plus the existing non-overlapping ones. When compaction is done, the manifest (the reference list of segments) is updated and the old segments are deleted.
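The steps above can be sketched as follows (a toy model of my own, with a tiny max_keys standing in for the real size threshold; not LevelDB's implementation):

```python
# Toy model of one level-based compaction step.
# A segment is (min_key, max_key, dict-of-key-to-value).

def overlaps(a, b):
    """True when the key ranges of two segments intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def compact(segment, next_level, max_keys=2):
    """Merge one level-L segment into level L+1; return the new L+1."""
    touched = [s for s in next_level if overlaps(segment, s)]
    kept = [s for s in next_level if not overlaps(segment, s)]
    # Merge the keys; the level-L segment is newer, so it wins conflicts.
    merged = {}
    for _, _, data in touched:
        merged.update(data)
    merged.update(segment[2])
    # Re-split into fixed-size segments: ranges are recomputed, not kept.
    keys = sorted(merged)
    fresh = []
    for i in range(0, len(keys), max_keys):
        chunk = keys[i:i + max_keys]
        fresh.append((chunk[0], chunk[-1], {k: merged[k] for k in chunk}))
    return sorted(kept + fresh, key=lambda s: s[0])
```

Note how the new ranges are derived from the merged keys rather than inherited, and how only whole segments are read and written, which is what makes the I/O sequential.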
From that data, new segments at a given level are created based on:
- size (e.g. a new segment every 2 MB)
- overlap: a new segment is started as soon as the key range of the current one overlaps with more than 10 segments at the level below (level L+2 in LevelDB's documentation)
Tombstones are kept around until the last level (to make sure we keep hiding the possibly older values in higher levels). They claim that they remove the tombstone for a given key if no higher level has a segment whose range overlaps the current key, but that looks like a minor optimisation.
In LevelDB, the max size of level L is 10^L MB (e.g. 10 MB for level 1, 100 MB for level 2, etc). Levels thus increase in size exponentially, though each segment is of fixed size (at least segment sizes are not exploding).
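A quick sanity check of that sizing rule (the 2 MB segment size is taken from the cut rule above; the 10^L rule is LevelDB's documented one):

```python
# Sanity check of LevelDB's level sizing against a fixed segment size.

def max_level_size_mb(level):
    """Maximum total size of level L, per LevelDB's documentation."""
    return 10 ** level

def max_segments(level, segment_size_mb=2):
    """How many fixed-size segments a full level can hold."""
    return max_level_size_mb(level) // segment_size_mb
```

So level 1 holds at most 5 two-megabyte segments, level 2 holds 50, and so on: exponential capacity, constant read amplification per level.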
All this compaction only involves sequential reads and sequential writes (when done right).
I'm well aware that many improvements have been built atop this initial approach but they all rely on you understanding this first cornerstone improvement :)
I noticed that the Devoxx France call for paper (CfP) application was influencing my votes. Sneaky one!
Jeff Atwood's tweet made me think again about something I had noticed:
When we designed Stack Overflow I intentionally put author at bottom: you should read the actual content before "deciding" credibility
In the CfP, when you review a proposal, you first see:
- the title and type of proposal (conference, Tools in Action etc)
- then the abstract
- then the private message to the committee
At this stage, you have to scroll down to see more, especially if you are reviewing on a tablet like I do. Your brain will make a pre-judgement based on the title and abstract, like any attendee's would. It will later absorb the private message, but by then it is (kind of) too late.
It's only after scrolling that you see who the person is and what qualifications he or she has.
I don't know if Nicolas did it on purpose but it has brilliant side effects:
- You will value good titles and good abstracts over good back-channel info
- You will alter your judgment based on the person's qualifications and fame last. Cognitive research seems to indicate that your reptilian judgement favors the first data seen much more.
- This is a nice trick to favor subjects over rock stars
What's even more brilliant is that even if I'm conscious of this, the trick still works :)
A good presentation is a mix of good subject, good content and good presenter. I do think good presenters are key, but besides fame or first-hand experience, that is the hardest thing to judge. What I love about the way the CfP app does it is that it makes it harder for me to either:
- let through a just-OK proposal from a famous speaker
- dismiss a good proposal from a relatively unknown speaker
I still adjust my grade based on who proposes, of course: that's part of the magic equation. But certainly less than if the name came first.
Now I get how Devoxx France "encourages" new speakers.
There is a lot being written about corporate tax optimisation/evasion these days, both in Europe and in the US. This raises a more general question: why do we tax corporate profits? I had this debate with a friend this summer, which led me to research the topic.
There is a very interesting paper on this subject as well as an analysis of the distortion of corporate tax. Here is my summary.
Why (not) a corporate income tax
The main arguments - to me - for or against a corporate income tax are summarized in the following paragraphs.
A corporation benefits from common infrastructures (highways, social security etc) and thus must pay its due to Society.
However, a corporation is (in the end) always owned by individuals, who themselves pay taxes to finance common infrastructures. Some argue this leads to double taxation.
A corporation can be owned by foreign investors: better tax these guys via corporate taxes rather than the folks that actually vote for us.
A corporate tax leads to some sort of pre-tax of the foreign investor by virtue of lesser dividends.
Individuals would feel it to be unfair if they were to pay for all taxes while corporations are making plenty. Of course, individuals do indeed pay it all in the end whether they see it or not but it looks like it's a hard notion to grasp for most.
A corporation bringing back the profits of a foreign subsidiary can deduct the actual income tax it paid in the subsidiary's country from its dividend tax. This essentially erases the foreign income tax, assuming its rate is lower than the domestic dividend tax rate. These treaties exist to avoid double taxation and lessen the burden of an income tax.
Without corporate income tax, personal income tax diminishes as individuals find ways to "incorporate" their revenues to avoid taxation.
And of course the cynical view is that governments are addicted to spending and they need more fresh cash than a junkie needs dope (this argument is not in the paper for obvious reasons).
What distortions does it cause
Again, this is a personal cherry-picking from the paper. What is interesting is that this paper is based mostly on studies of the EU, not the US as is often the case.
Small companies are often offered lower tax rates to compensate for market failures. It would be better to use a separate, explicit mechanism (e.g. direct aid) to compensate for them. As it is, different tax rate brackets create a disincentive to grow.
A European study shows that the higher the corporate income tax, the lower the wages: for every additional euro of corporate income tax, wages are reduced by 0.92 euro in the long run. Corporate income tax is not good for your salary apparently :)
As explained in the previous section, a corporate income tax lower than the personal income tax leads to a shift from personal to corporate taxation: people (e.g. entrepreneurs) optimise and "incorporate" their income. This is one of the few arguments for a higher corporate income tax.
Income tax influences where an international company opens foreign subsidiaries: a 1-point income tax increase decreases the chance of a subsidiary being opened by 3.96%. Ouch!
Same for foreign investment: a 1-point tax increase decreases foreign investment by 2.9%.
And finally, profit shifting. Profit shifting is what big international companies are accused of these days (Apple, Google, Starbucks, Ikea etc). One study estimates that, due to this phenomenon, a 1-point increase in tax rate leads to a loss of 17.2% of the planned extra tax collection. I'm personally skeptical of the averages: we cannot reason about this phenomenon by mean or median; I imagine a company engaging in such activity would do it in an all-or-nothing fashion.
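To make that last figure concrete, here is the back-of-the-envelope arithmetic (the 17.2% leakage comes from the study; the profit base and rate increase are made-up illustration numbers):

```python
# Illustration of profit-shifting leakage on a planned tax increase.

def extra_tax(profit_base, rate_increase_points, leakage=0.172):
    """Planned vs actually collected extra tax after profit shifting."""
    planned = profit_base * rate_increase_points / 100
    collected = planned * (1 - leakage)
    return planned, collected
```

On a 1000-unit profit base, a 1-point rate increase plans 10 units of extra tax but only collects about 8.28 once profits shift abroad.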
What's the take away?
Tax is hard: you touch one button and unexpected things move all over the place. Be careful of taxes that go to 11, you might become deaf... and sterile ;) More seriously, this paper was hard to find, but knowing all this will make you a better citizen.
Read the paper, there is a lot more to it
I had to summarize, cherry-pick and cut corners to keep this entry short. Go read the paper: it is easy to read (except in a few specific areas), goes into greater detail and cites all of its sources. And above all it is very interesting!
I put a copy of this paper here since it disappeared from its original location. This paper is copyright the European Commission and written by the staff of the European Commission's Directorate-General for Taxation and Customs Union (Gaëtan Nicodème in particular).
The problem is that some of these patches are very, very useful. I have created a Homebrew tap to maintain Mutt with the two key patches I use: the sidebar patch and the trash patch.
At the time of writing, it uses Mutt 1.5.24 but I might update it. To use the formulae, do:
brew tap emmanuelbernard/mutt
brew install emmanuelbernard/mutt/mutt
# or alternatively
brew install https://raw.githubusercontent.com/emmanuelbernard/homebrew-mutt/master/Formula/mutt.rb
I personally build them with the following options:
brew install emmanuelbernard/mutt/mutt --with-sidebar-patch --with-trash-patch --with-gpgme --with-s-lang
s-lang supposedly has better support for color schemes like Solarized.
You can find the code at https://github.com/emmanuelbernard/homebrew-mutt.
Recently, rsync-ing to my Synology DiskStation stopped working. I had just changed my default SSH port to a non-standard one. Learn how to fix it.
Synology recommends changing the default SSH port to a non-standard one. What they forget to tell you is that doing so will break your ability to rsync into the machine. Here is a way to fix it.
The workaround way
One way to fix it is to use the --rsync-path option to point at Synology's rsync binary:
rsync -avz --rsync-path=/usr/syno/bin/rsync from-dir/ synology:/volume1/Backups/to-dir
Since you use a non-standard SSH port, make sure to also update your .ssh/config file to point to the right one.
Host synology
    User alice
    Hostname 192.168.2.34  # my synology IP
    Port 911               # my new SSH port and a nice car
That works around the Synology quirk, but it requires updating all your rsync scripts.
The proper way
Log into the web management console and open the Backup & Replication application. Go to Backup Services and update the SSH encryption port to match your new SSH port, in my example 911.
Note that for some reason the UI forbids certain port numbers. Make sure to pick a non-restricted number for your SSH port in the first place, or proceed by trial and error.
The SSH port itself can be changed under Terminal & SNMP.
I just learned about the ability to fold in Vim. For mere mortals, it means hiding parts of the file.
Here is some code to put in your .vimrc to enable folding for Asciidoc(tor) files. It folds asciidoc files at section boundaries and uses nested folds for subsections.
"" Fold Asciidoc files at sections and using nested folds for subsections " compute the folding level function! AsciidocLevel() if getline(v:lnum) =~ '^== .*$' return ">1" endif if getline(v:lnum) =~ '^=== .*$' return ">2" endif if getline(v:lnum) =~ '^==== .*$' return ">3" endif if getline(v:lnum) =~ '^===== .*$' return ">4" endif if getline(v:lnum) =~ '^====== .*$' return ">5" endif if getline(v:lnum) =~ '^======= .*$' return ">6" endif return "=" endfunction " run the folding level method when asciidoc is here autocmd Syntax asciidoc setlocal foldexpr=AsciidocLevel() " enable folding method: expression on asciidoc autocmd Syntax asciidoc setlocal foldmethod=expr " start with text unfolded all the way autocmd BufRead *.adoc normal zR autocmd BufRead *.asciidoc normal zR " TODO following does not work as folding is lost up reloading " autocmd Syntax asciidoc normal zR
I'm sure it can be improved - I'd love to fold blocks as well - but that's a start.
Here are a few commands to remember for folding in Vim:
- zo: open a fold at the cursor
- zO: open all folds at the cursor, recursively
- zc: close a fold at the cursor
- zC: close all levels of folds at the cursor
- za / zA: toggle a fold / all levels of folds at the cursor
- zm: close folds by one level across the file
- zM: close all folds across the file
- zr: open folds by one level across the file
- zR: open all folds across the file
- zj / zk: move to the next / previous fold
- [z / ]z: go to the beginning / end of the current fold
Maven is quite verbose. Finding the useful information when a test fails requires you to squint. Unless you bring some coloring to the massive Maven output.
The state of color output in Maven is still quite messy. Just look at Arnaud's blog to see how non user friendly that is.
Enter Jean-Christophe and his Maven Color project. The goal is to bring a colorized Maven console in an easy and cross-platform way.
It's relatively easy to install (check the README), and is even easier on Mac OS X
brew tap jcgay/jcgay
brew install maven-deluxe
From there you might need to unlink your brew maven install.
Usually, you are done. Well, not if like me you use CheckStyle.
SLF4J beam crossing
Unfortunately for me, it was failing on Hibernate OGM. The problem is that the CheckStyle plugin is compiled against the Maven 2.x version of SLF4J, i.e. an old one.
This leads to funky errors like - in color mind you:
------------------------------------------------------------------------
Failed to execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.12.1:checkstyle (check-style) on project hibernate-ogm-core: Execution check-style of goal org.apache.maven.plugins:maven-checkstyle-plugin:2.12.1:checkstyle failed: An API incompatibility was encountered while executing org.apache.maven.plugins:maven-checkstyle-plugin:2.12.1:checkstyle: java.lang.NoSuchMethodError: org.slf4j.spi.LocationAwareLogger.log(Lorg/slf4j/Marker;Ljava/lang/String;ILjava/lang/String;Ljava/lang/Throwable;)V
-----------------------------------------------------
realm = plugin>org.apache.maven.plugins:maven-checkstyle-plugin:2.12.1
strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/plugins/maven-checkstyle-plugin/2.12.1/maven-checkstyle-plugin-2.12.1.jar
urls = file:/Users/emmanuel/.m2/repository/org/slf4j/slf4j-jdk14/1.5.6/slf4j-jdk14-1.5.6.jar
urls = file:/Users/emmanuel/.m2/repository/org/slf4j/jcl-over-slf4j/1.5.6/jcl-over-slf4j-1.5.6.jar
urls = file:/Users/emmanuel/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar
urls = file:/Users/emmanuel/.m2/repository/org/codehaus/plexus/plexus-interactivity-api/1.0-alpha-4/plexus-interactivity-api-1.0-alpha-4.jar
urls = file:/Users/emmanuel/.m2/repository/backport-util-concurrent/backport-util-concurrent/3.1/backport-util-concurrent-3.1.jar
urls = file:/Users/emmanuel/.m2/repository/org/sonatype/plexus/plexus-sec-dispatcher/1.3/plexus-sec-dispatcher-1.3.jar
urls = file:/Users/emmanuel/.m2/repository/org/sonatype/plexus/plexus-cipher/1.4/plexus-cipher-1.4.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/reporting/maven-reporting-api/3.0/maven-reporting-api-3.0.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/reporting/maven-reporting-impl/2.2/maven-reporting-impl-2.2.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/doxia/doxia-core/1.2/doxia-core-1.2.jar
urls = file:/Users/emmanuel/.m2/repository/xerces/xercesImpl/2.9.1/xercesImpl-2.9.1.jar
urls = file:/Users/emmanuel/.m2/repository/xml-apis/xml-apis/1.3.04/xml-apis-1.3.04.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/httpcomponents/httpclient/4.0.2/httpclient-4.0.2.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/httpcomponents/httpcore/4.0.1/httpcore-4.0.1.jar
urls = file:/Users/emmanuel/.m2/repository/commons-codec/commons-codec/1.3/commons-codec-1.3.jar
urls = file:/Users/emmanuel/.m2/repository/commons-validator/commons-validator/1.3.1/commons-validator-1.3.1.jar
urls = file:/Users/emmanuel/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar
urls = file:/Users/emmanuel/.m2/repository/commons-digester/commons-digester/1.6/commons-digester-1.6.jar
urls = file:/Users/emmanuel/.m2/repository/commons-logging/commons-logging/1.0.4/commons-logging-1.0.4.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/doxia/doxia-sink-api/1.4/doxia-sink-api-1.4.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/doxia/doxia-logging-api/1.4/doxia-logging-api-1.4.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/doxia/doxia-decoration-model/1.4/doxia-decoration-model-1.4.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/doxia/doxia-site-renderer/1.4/doxia-site-renderer-1.4.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/doxia/doxia-module-xhtml/1.4/doxia-module-xhtml-1.4.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/doxia/doxia-module-fml/1.4/doxia-module-fml-1.4.jar
urls = file:/Users/emmanuel/.m2/repository/org/codehaus/plexus/plexus-i18n/1.0-beta-7/plexus-i18n-1.0-beta-7.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/velocity/velocity-tools/2.0/velocity-tools-2.0.jar
urls = file:/Users/emmanuel/.m2/repository/commons-chain/commons-chain/1.1/commons-chain-1.1.jar
urls = file:/Users/emmanuel/.m2/repository/dom4j/dom4j/1.1/dom4j-1.1.jar
urls = file:/Users/emmanuel/.m2/repository/sslext/sslext/1.2-0/sslext-1.2-0.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/struts/struts-core/1.3.8/struts-core-1.3.8.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/struts/struts-taglib/1.3.8/struts-taglib-1.3.8.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/struts/struts-tiles/1.3.8/struts-tiles-1.3.8.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/maven/shared/maven-doxia-tools/1.4/maven-doxia-tools-1.4.jar
urls = file:/Users/emmanuel/.m2/repository/commons-io/commons-io/1.4/commons-io-1.4.jar
urls = file:/Users/emmanuel/.m2/repository/junit/junit/3.8.1/junit-3.8.1.jar
urls = file:/Users/emmanuel/.m2/repository/org/codehaus/plexus/plexus-component-annotations/1.5.5/plexus-component-annotations-1.5.5.jar
urls = file:/Users/emmanuel/.m2/repository/org/codehaus/plexus/plexus-resources/1.0-alpha-7/plexus-resources-1.0-alpha-7.jar
urls = file:/Users/emmanuel/.m2/repository/org/codehaus/plexus/plexus-utils/3.0.15/plexus-utils-3.0.15.jar
urls = file:/Users/emmanuel/.m2/repository/org/codehaus/plexus/plexus-interpolation/1.19/plexus-interpolation-1.19.jar
urls = file:/Users/emmanuel/.m2/repository/org/codehaus/plexus/plexus-velocity/1.1.8/plexus-velocity-1.1.8.jar
urls = file:/Users/emmanuel/.m2/repository/com/puppycrawl/tools/checkstyle/5.7/checkstyle-5.7.jar
urls = file:/Users/emmanuel/.m2/repository/antlr/antlr/2.7.7/antlr-2.7.7.jar
urls = file:/Users/emmanuel/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.3/commons-beanutils-core-1.8.3.jar
urls = file:/Users/emmanuel/.m2/repository/com/google/guava/guava-jdk5/14.0.1/guava-jdk5-14.0.1.jar
urls = file:/Users/emmanuel/.m2/repository/org/apache/velocity/velocity/1.5/velocity-1.5.jar
urls = file:/Users/emmanuel/.m2/repository/commons-lang/commons-lang/2.1/commons-lang-2.1.jar
urls = file:/Users/emmanuel/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar
urls = file:/Users/emmanuel/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar
Number of foreign imports: 1
import: Entry[import from realm ClassRealm[maven.api, parent: null]]
-----------------------------------------------------
-> [Help 1]
There is a relatively easy fix. You can force the SLF4J version of CheckStyle in your plugin dependencies.
<plugin>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <version>2.15</version>
  <dependencies>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>jcl-over-slf4j</artifactId>
      <version>1.7.5</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-jdk14</artifactId>
      <version>1.7.5</version>
    </dependency>
  </dependencies>
</plugin>
Now I can get spanked by CheckStyle in color!
If, like me, you drown in Maven outputs, go give Maven Color a try. And many thanks to Jean-Christophe for his help in solving the CheckStyle-of-death problem.