I have written about Microsoft Teams before (in German), how horrible a user experience it is and so on. Let me tell you, it hasn’t gotten any better. Only “newer”.

Remember, Microsoft loves Linux now. Right? Or so people — including, apparently, MS itself — keep bullshitting all the time, all the while WSL is a horrendous trap — albeit a technically interesting one (especially v1) — to ensnare Linux-curious devs on Windows. They love Linux so much that less than 18 months ago (2022-11-07) they announced they were going to throw out the — by that time already utterly outdated1 — Linux Teams app and pushed Linux users to the so-called PWA2. 🤮

And now we’re plagued with the “New Teams” (Microsoft’s PR-lingo) on Windows and literally no option left on Linux. Great, Microsoft loves Linux, right?! 🤦

What the flying F, Microsoft? Really? Why? 🖕

Microsoft makes it deliberately nigh-impossible to continue using the PWA which they — not even 18 months ago — shoved down the throats of everyone who wasn’t using Windows.

Screenshot of modal dialog preventing any action in the Teams PWA

Never mind that Teams has gotten progressively3 crappier over the years. Microsoft hadn’t even managed to get it to work the same way in a — non-Microsoft — browser as in the “old” app4. Heck, they didn’t even manage to make the desktop app and the PWA behave consistently across Edge, Chrome and Firefox. Whether a feature worked or not was as good as a lottery — just, it felt, with even fewer winning tickets.

And now the “New Teams” is being shoved down the throats of millions of involuntary5 users. It is based on the Edge WebView2, which — surprise, surprise — is nothing other than yet another Chromium-based “foundation”6, just like Electron was in “old” Teams.

Once you install the Teams desktop application, it appears that approximately 70% of your CPU resources and at least several GiB of RAM are reserved for it. It’s an incredible resource hog indeed. It’s either running a build or attending that video call. Pick one.

I sincerely hope that the anti-trust authorities step in as soon as possible to put an end to this. Although arguably it is probably too late by now. Alternative solutions7 have been all but pushed out of the market by Teams. The fact that many companies struggled to accommodate home office workers starting in 2020 helped Teams, as the apparent “gratis” solution, to become the de facto standard. Competitors have been hampered by the loss of income from the potential customers that ended up using “gratis” Teams. But even the push for “Teams Premium” starting last year doesn’t seem to have hampered Teams’ conquest.

// Oliver

PS: use the following filters from the “My Filters” tab in uBlock₀ on a browser that allows uBlock₀ to work to its full potential:

  1. to the best of my knowledge it had never even left beta status … mind you, this was the same code base with Electron and the rest on the server side! []
  2. progressive web app, aka teams.microsoft.com []
  3. ah, there we go with the progressive in PWA! []
  4. which is just an embellished browser engine anyway … []
  5. because the crap is mandated by their respective employers []
  6. arguably in the spirit of what IE used to be with the IE web view being available to third-party applications []
  7. and in many aspects better ones []
Posted in Linux, Software

Migrating data from 2 TB SSD to 4 TB SSD with iODD ST400 drive enclosure

Linux is my main system, but I prefer using NTFS for various use cases — and some use cases in fact require something like NTFS.

The ST400 is the successor to several Zalman-rebranded iODD devices offering a similar feature set. The main selling point: store your ISO files, VHDs and whatnot on an NTFS, FAT32 or exFAT drive and mount them in a way that makes the drive enclosure pose as an optical drive (CD/DVD …).

I had so far used the older Zalman-branded and iODD-branded drive enclosures with up to 2 TB hard drives and SSDs. I also have an iODD mini with 512 GB and now wanted to upgrade the ST400 from a 2 TB SSD to 4 TB.

So the first thing I did was create the two partitions I wanted on the new drive, then copy the data over from the old one using rsync. Nothing spectacular here.

Then, after moving the new bigger SSD into the ST400 enclosure, the enclosure would report “No supported partition” with the 2.74.4 firmware. Dang.

Well, so I thought this could be remedied by converting from MBR to GPT. After all, drives of this size are known to cause boot problems, because start sectors can end up beyond MBR’s 32-bit addressable range. Not that I want to boot from the drive itself (in its function as HDD/SSD), mind you. Anyway, gdisk /dev/sdX will basically do the whole job swiftly, if you write (w) the GPT it automatically creates from the MBR partition table. I backed up the first 2 MiB of the disk using dd in order to recover from a possibly botched conversion1. But all went well. A quick partprobe /dev/sdX as superuser made the changes available.
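Spelled out as commands, the procedure looks roughly like this — a sketch, with /dev/sdX as a placeholder; double-check the device name, since pointing these at the wrong disk loses data:

```shell
# Back up the first 2 MiB (MBR, partition table and then some) for rollback
sudo dd if=/dev/sdX of=first-2mib.bin bs=1M count=2

# gdisk builds a GPT from the MBR partition table on load; 'w' writes it out
sudo gdisk /dev/sdX    # then: w (write), y (confirm)

# Make the kernel re-read the new partition table
sudo partprobe /dev/sdX
```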

I also had to do some shuffling of the partition sizes, since the iODD ST400 manual states:

At the first time, automatically finds mountable files on the largest partition (GPT / MBR, NTFS / exFAT / FAT32)

… and I needed to accommodate that. The outcome was this:

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      4294967295   2.0 TiB     8300  Linux filesystem
   2      4294967296      8001509375   1.7 TiB     8300  Linux filesystem

Notice something? For starters, of course, the device kept complaining about “No supported partition” — but it also says Linux filesystem. What the heck? Well, 8300 is Linux, I get that. But why did it pick that in the first place?
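That end sector of partition 1 is no coincidence, by the way: 4294967295 is exactly 2³² − 1, the largest value a 32-bit LBA field — as used by MBR — can hold, which with 512-byte sectors works out to the well-known 2 TiB limit. A quick sanity check of the arithmetic (Bash arithmetic expansion):

```shell
# MBR stores sector addresses as 32-bit values: 2^32 sectors is the ceiling.
# With 512-byte sectors that is the well-known 2 TiB limit.
echo $(( 2**32 - 1 ))            # 4294967295 -- the end sector of partition 1
echo $(( 2**32 * 512 / 2**40 ))  # 2           -- addressable capacity in TiB
```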

Turns out gdisk does that independent of the file system actually on the partition, so I needed gdisk again to switch the type to 0700 (Microsoft basic data). And lo and behold, that did the trick. After syncing, disconnecting and reconnecting, the firmware on the ST400 recognized the larger partition and was able to list the files and folders on it, just as it had with the 2 TB drive.
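For the record, the type change doesn’t require driving gdisk interactively; sgdisk, its scriptable sibling, should do the same in one line (a sketch, /dev/sdX again being a placeholder):

```shell
# Set partitions 1 and 2 to type 0700 (Microsoft basic data), then verify
sudo sgdisk --typecode=1:0700 --typecode=2:0700 /dev/sdX
sudo sgdisk --print /dev/sdX
```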

So success.

// Oliver

PS: the only issue I had was gparted erroring out on me with a mysterious error. Given that the whole resizing process ran for 12 hours or so and I didn’t attend it throughout, it was quite annoying to see an error and no indication of where it failed. Fortunately, going by the timestamp, the relevant resizing should have been done, and looking at the disk confirmed it. After checking the integrity of the partitions, I resized the remaining one and was finally done. The error was:

$ sudo gparted /dev/sdh
GParted 1.4.0
configuration --enable-libparted-dmraid --enable-online-resize
libparted 3.3

(gpartedbin:55435): glibmm-ERROR **: 20:22:47.643:
unhandled exception (type std::exception) in signal handler:
what: basic_string::_M_replace_aux

Trace/breakpoint trap
  1. Side-note: this isn’t about backups, this is about speed. Copying huge amounts of data back and forth takes vast amounts of time and wears down the SSD, albeit slowly. []
Posted in Administration, EN, Linux

Two more useful flags for cl.exe

/Be appears to spit out a make file1 snippet containing the recipe to reproduce a given run of cl.exe. It even takes environment variables into account.

Check it out:

        @cd D:\17.7.5\x64
        @set INCLUDE=
        @set LIB=
        @set LIBPATH=
        @set CL=/nologo /utf-8
        @set _CL_=-permissive -nologo
        @set LINK=
        D:\17.7.5\x64\cl.exe /nologo /BE /Be /?

As you can see, it takes care of changing into the directory, setting the various recognized environment variables — carrying over those that were actually set (CL and _CL_ in my case) — and then invoking the same command line that I invoked.

Another useful switch appears to be /Bv which shows the versions of the binaries involved like so:

cl.exe /nologo /Bv
Compiler Passes:
 D:\17.7.5\x64\cl.exe:        Version 19.37.32825.0
 D:\17.7.5\x64\c1.dll:        Version 19.37.32825.0
 D:\17.7.5\x64\c1xx.dll:      Version 19.37.32825.0
 D:\17.7.5\x64\c2.dll:        Version 19.37.32825.0
 D:\17.7.5\x64\link.exe:      Version 14.37.32825.0
 D:\17.7.5\x64\mspdb140.dll:  Version 14.37.32825.0
 D:\17.7.5\x64\1033\clui.dll: Version 19.37.32825.0

cl : Command line error D8003 : missing source filename

// Oliver

  1. arguably NMake flavored []
Posted in C/C++, Reversing, Software

(New) shittiest software from Microsoft in my book

Previously the so-called Office and especially Teams were ranking quite high among the shittiest software from Microsoft in my book. In fact Teams in all its incarnations is probably going to take up the four rear slots in my top five shittiest software list for as long as I have to use this crap.

Well, right now robocopy has taken the first place, however. It just wiped a whole folder of ISO files clean when I told it — or rather that’s what I thought I had told it — to mirror folder A to location B\.

This shit piece of software wiped the whole of B and it did so prior to commencing the copying of the new content. What the flying eff? …

Common sense tells me copycommand A B\ means put A into B. robocopy, however, knows better: obviously I meant to delete everything in B up front and then copy — not A into B, but the contents of A into B\.

Intuitive. Big time!
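For anyone wanting to sidestep the same trap: as far as I understand robocopy’s mirror mode (/MIR, equivalent to /E plus /PURGE), the destination is made identical to the contents of the source — including deleting whatever else sits in the destination. To put A inside B, the destination has to name the subfolder explicitly. A sketch:

```batch
:: DANGER: makes B identical to the *contents* of A,
:: deleting everything in B that is not present in A
robocopy A B /MIR

:: What was presumably intended: mirror A as a subfolder of B
robocopy A B\A /MIR
```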

// Oliver

PS: I have backups and several GiB of the ISOs were only downloaded yesterday from my.visualstudio.com. So the annoying part is that this costs extra time and bandwidth now …

Posted in /dev/null, Software

NATO’s open door policy

Now, while any small town club is able to reject applications for membership and scholarships are tied to preconditions — and ignoring for a minute that NATO even refused to talk about Russia’s security interests, including its unwillingness to accept NATO right on its borders1, in November/December 2021 — NATO has maintained that its open door policy essentially keeps it from outright rejecting Ukraine’s attempts at joining the “alliance”.

Curiously though, NATO’s open door policy either wasn’t a thing back in the early nineteen-fifties, shortly after it was founded2, or the door isn’t quite as open as NATO strategic communications — a neologism for propaganda — would make us believe.

Not only did the USSR — aka Soviet Union — of which Russia eventually became the sole successor in terms of international law3 apply to NATO in 1954, one year after Stalin’s death; nope, Russia applied again, according to the account of George Robertson. Although perhaps the term “apply” is a stretch here, given the form it is alleged to have had. That is, Putin allegedly said he didn’t want Russia to wait in line with “countries that don’t matter”.

Still, it turns out that, in fact, NATO doesn’t have an open door policy.

Just like Putin reached out to Germany, to the West, exactly two weeks after 9/11 and kept trying, only to slowly realize that Russians were not welcome by “the West”.

Well, one should not be surprised, since in the words of NATO’s first secretary general the purpose of NATO’s creation has always been to “keep the Soviet Union out, the Americans in, and the Germans down.”4

The supposed open door policy seems more like a ruse to get Ukraine to fight for NATO’s interests to the last Ukrainian. After all, let’s not forget that Saakashvili, mistaking the outcome of the NATO summit in 2008 for something it wasn’t, attacked South Ossetia and got rebuffed by Russia. A fact that is these days often distorted into “Russia attacked Georgia”, despite the findings of an EU-sponsored study which found the opposite: i.e. Georgia attacked Russia.

// Oliver

  1. Imagine the scenario with tables turned! []
  2. and before the Warsaw Pact got founded! []
  3. including taking over debt service! []
  4. Anyone wondering why Germany is in NATO at all? []
Posted in EN, Opinion, Thoughts

Undocumented MSVC

Some ongoing research. For obvious reasons I can only share results and tools, but not actual sample data.

Posted in EN, Reversing, Software

Log build command lines with cl.exe, link.exe and friends

Turns out you can enable detailed logging of the command lines run by MSBuild when building from Visual Studio or the command line.

This may not seem like much, until you realize that you technically rarely get to see the actual command lines executed from the logs. That’s because of response files. These are files containing the command line arguments, one per line, and passed as @FILENAME. This trick even logs those command lines that end up being written into response files.
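To illustrate the mechanism (the file name and flags here are made up): a response file simply holds the arguments, and the visible process command line only ever shows the @-reference — which is why naive logging comes up short.

```batch
:: args.rsp contains, one argument per line:
::   /nologo
::   /W4
::   main.cpp
:: The visible process command line is merely:
cl.exe @args.rsp
```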

An environment variable named LOG_BUILD_COMMANDLINES can be set to the path of a file into which to log the build command lines. As far as I can tell the containing directory ought to exist.

I have done this simply in a Directory.Build.props for one of my pet projects, so you can have a look there. Alternatively observe the trick (again, this ought to go into a Directory.Build.props):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="Current" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" InitialTargets="LogBuild">
  <Target Name="LogBuild" BeforeTargets="SetUserMacroEnvironmentVariables;SetBuildDefaultEnvironmentVariables">
    <SetEnv Name="LOG_BUILD_COMMANDLINES" Value="$(ThisProjectBuildLogFileName)" Prefix="false" />
  </Target>
</Project>

This logs the build command lines into the directory in which the Directory.Build.props resides, under a name BuildCommandLines.log.

SetEnv isn’t well-documented, in my opinion, but you can simply leave off the Target attribute and it’ll default to the current (MSBuild) process and any commands invoked from that will inherit it.

How did I find out? By investigating link.exe and other MSVC toolchain binaries1. However, it turns out there was prior art over here (earliest archived link).

This environment variable seems to have this effect at the very least with cl.exe and link.exe, but it stands to reason that other related tools also use it.



  1. more on this in a subsequent blog post []
Posted in C/C++

Now the true colors show

With Great Britain’s announcement that it will deliver armor-piercing depleted-uranium ammunition to Ukraine — and the practically non-existent objections in the media — the “supporters” of the Kiev government under Zelensky have shown their true colors.

Until now it was still claimed that the arms deliveries were about Ukraine and the Ukrainians, and one might perhaps even have followed the convoluted “logic” of creating peace with weapons1 — while, on the other hand, the peace efforts of the two official parties to the conflict — Ukraine and Russia — were demonstrably undermined by the West just weeks after Russia’s attack2.

With the delivery of depleted-uranium ammunition the situation changes dramatically. Recall the consequences of the “use” of this ammunition (article in English). Almost two months ago there were already rumors that the USA was planning to deliver this sort of ammunition. Now Great Britain leads the way, while the USA still officially demurs. After all, Ukraine makes such a wonderful final repository for the nuclear waste of these two nuclear powers!

The “clou” of this uranium-titanium-alloy ammunition is that it effortlessly penetrates even heavy armor — as found on main battle tanks. The Russians use tungsten instead, which as ammunition does not quite reach the penetrating power of depleted-uranium rounds. Another “advantage” is that one’s nuclear waste is thus cheaply put to further use. The clear disadvantage is that the ammunition disintegrates into fine dust upon penetrating armor, which subsequently both spreads with the wind and settles into the soil. And that in the breadbasket of Europe!

While the dust is supposedly barely radioactive, if at all, the released uranium compounds are all the more toxic. To summarize once more from the English-language article linked above: in Fallujah the “use” of this sort of ammunition was followed by a fourfold increase across all types of cancer and a twelvefold increase in childhood cancers3. The increase in leukemia alone was 32-fold, in breast cancer 10-fold — as a comparison the article cites Hiroshima, where the increase in leukemia was “only” 17-fold.

So this gambles with the lives of the population in the contested areas as well as — though that goes without saying anyway — those of the soldiers. To keep insisting on a Ukrainian victory through arms deliveries under these circumstances is nothing short of cynical. For assume the hypothetical case — that Ukraine really were able to retake the territories seized by Russia — it would be retaking soil contaminated with uranium compounds. Assuming a lengthy decontamination, the question remains whether the celebrated Ukrainian black earth would retain its potency — to say nothing of whether the country, indebted for decades to come4, would be able to profitably sell the fruits of non-decontaminated black earth abroad.

The former Yugoslavs, too, can tell you a thing or two about the fatal effects of this ammunition. The crime of using it is not diminished by some daft English noblewoman from their war ministry proclaiming that the British army has been using this ammunition for decades. Sure, asbestos was used for decades as well. But as long as it doesn’t happen in one’s own British front yard but somewhere in Eastern Europe, one can evidently not give a damn.

The values-based West has thereby once more shown its true colors and demonstrated how dear the Ukrainians5 are to its heart. Not only is the fight being waged down to the last Ukrainian while the already poor country is driven further into debt — no, the future of the people, and of the land they live on, is being lastingly destroyed for generations. That’s how peace works! A supposed dictated peace by Putin’s grace naturally cannot compete with that.

// Oliver

  1. in a war that would never have come about without NATO-side provocations []
  2. see the Tagesspiegel article []
  3. e.g. leukemia under the age of 14 []
  4. see the “aid packages” in the form of loans []
  5. naturally only those whom Zelensky — who himself grew up speaking Russian — currently recognizes as Ukrainians []
Posted in DE, Meinung, Wertewesten

History and stories

It really is fascinating that the Cuban Missile Crisis is frequently invoked as a comparison for the current world situation — that is, the relationship between Russia and the USA — yet here, too, the “crisis” regularly begins, following the US-American reading, with the stationing of missiles in Cuba.

The prehistory, just as in the current case, is completely left out. It was thus the USA and NATO who stationed nuclear-armed medium-range missiles directly on the borders of the Soviet Union.

Another Zeitenwende, it seems. With such turning points there is generally only an “after”. And given the current official historical revisionism of Orwellian proportions, it is only a matter of time until Russia gets blamed for the Second World War. For the Green Youth of Munich, at least, it is already clear that “Operation Barbarossa”1 in 1941 was the high point of Russian colonial ambition:

From the second half of the 19th century onward, Russia wanted to rise into the “ranks of the European great powers”. The vast Russian empire could only reach its then size through settler conquest, with the expansion aimed not overseas (sic!) but at the north, the neighboring Asian countries and the indigenous population in the south. The high point at the time was “Operation Barbarossa” in 1941.

GJ München on Twitter (the link goes to Nitter; since deleted)

// Oliver

  1. meaning Unternehmen Barbarossa, Nazi Germany’s 1941 invasion of the USSR; presumably they informed themselves from some olive-green English-language source on the internet rather than from history books []
Posted in DE, Gedanken, Meinung

Aiding reproducibility in builds with MS Visual C++

<AdditionalOptions>%(AdditionalOptions) /d1trimfile:"$(SolutionDir)\"</AdditionalOptions>

In your .vcxproj file or a Directory.Build.props, when passed to the compiler (cl.exe, ClCompile), this should trim the leading path used for __FILE__. The trailing backslash is actually required here: SolutionDir itself ends in a backslash, which on the expanded command line would otherwise escape the closing double quote and wreak havoc; the extra backslash we add turns it into an escaped — i.e. literal — backslash instead.

GCC and Clang appear to have __FILE_NAME__. However, it should be noted that this expands only to the last component (i.e. past the last path separator). This may be desirable, but I find Microsoft’s idea a little more convincing in this case.

Additionally, you could pass /Brepro to cl.exe and link.exe.

Another good one is passing /pdbaltpath:%_PDB% to link.exe to make it leave out the full path to the .pdb file, i.e. only the file name itself will be recorded in the build artifact. Note, however: if you are copying around your resulting DLLs and executables, for example, and you don’t use a symbol store which you populate post-build, chances are that the debugger won’t find your debug symbol files. One way around this is to copy the .pdb files alongside the binaries or to use a symbol store1, as is customary.
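Putting the pieces together, a Directory.Build.props fragment could look like this — a sketch using the standard ClCompile/Link item definitions; whether you want each of these flags is, of course, up to you:

```xml
<Project>
  <ItemDefinitionGroup>
    <ClCompile>
      <!-- trim the __FILE__ prefix; /Brepro for reproducible output -->
      <AdditionalOptions>%(AdditionalOptions) /d1trimfile:"$(SolutionDir)\" /Brepro</AdditionalOptions>
    </ClCompile>
    <Link>
      <!-- record only the file name of the .pdb, not its full path -->
      <AdditionalOptions>%(AdditionalOptions) /Brepro /pdbaltpath:%_PDB%</AdditionalOptions>
    </Link>
  </ItemDefinitionGroup>
</Project>
```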

// Oliver

PS: here’s another blog article about the subject matter, leading to even further resources.
PPS: this comment on GitHub also provides some further details, including two other options: /experimental:deterministic (to warn about problematic code) and /d1nodatetime (which, according to the comment is implied by /Brepro).

  1. using the symstore tool []
Posted in C/C++, Programming, Software

Initialization of static variables (reminder)

Nice blog article which I ran across again recently: gynvael.coldwind.pl/?id=406

PS: probably also worth a look: Paged Out

Posted in C/C++, Uncategorized

FIDO2 for the credit card (Sparkasse). But not with Linux!

Last year I applied for a credit card with my Sparkasse — Mastercard was the only thing on offer, but fine.

So I applied and used the card once right after receiving it. The second attempt already went south, because, due to regulations, some form of MFA1 had to be used. Would I please install the Sparkasse’s S-ID-Check app on my Android device. Said and done. But whoops — the app refused to work, because my Android device is rooted for privacy and security reasons2. So I quickly checked with my Sparkasse advisor. Yes, there was still the option of using FIDO2. Ah, great … the first practical private use case for me, so I promptly ordered a Feitian ePass FIDO2 A4B from the recommended Sparkasse shop.

It arrived a few days later. This is where it first got bizarre. My own Sparkasse had no documentation whatsoever on its website about the procedure — some Sparkasse from Friesland, on the other hand, did. So off to www.online-zahlen-mit-fido.de to start the registration.

And this is what you got to see as a Linux user:

Registration aborted

The cheeky part: my browser very much supports FIDO2 on Linux3, and the restriction is quite simply an artificial one imposed by PLUSCARD Service-Gesellschaft für Kreditkarten-Processing mbH, with whom the registration is supposed to be performed.

Upon inquiry, the following enlightening answer came back:

The most important information first. The FIDO token is only compatible with Windows 10 and macOS (Big Sur) and higher in our house.
All other operating systems are unfortunately excluded from this payment method.

Unfortunately it doesn’t say “Windows 10 and macOS (Big Sur) and better” — otherwise one could at least have argued 😉

Regarding your second question, why the app cannot be used on a rooted phone.
The S-ID-Check app can generally not be installed on rooted devices. The reason for this are the requirements of the German Federal Financial Supervisory Authority (BaFin). These state that various security precautions have to be taken when both the purchase and the authorization of the payment happen on the same device.

The assumption that I would want to make purchases on the very device on which I use this S-ID app is a bold one, to say the least. It wouldn’t have occurred to me until then, but that is apparently the scenario that has to be prevented.

And further:

In this case it must be ensured that the original operating system has not been modified. This results in the exclusion of rooted devices. The same configuration can be observed with comparable apps on the market that can initiate payments, as well as with online banking apps. Rooting is, as a rule, always excluded.

That much is probably true — I have the same problem with my Icelandic bank’s app. The takeaway seems to be: rooting == generally evil and insecure.

Incidentally, it also turned out that the company’s FAQ does indeed document this restriction on the use of FIDO4. The wording from the FAQ (as of the day this post was published):

FIDO can be used with Windows (Windows 10 and later) and macOS (Big Sur and later).

FAQ: Windows and macOS only

I then suggested an alternative wording 😉:

If you use an operating system other than Windows (Windows 10 and later) or macOS (Big Sur and later), we refuse the registration and activation of your FIDO2 token, as well as its use.

I had also written:

Do let me know as soon as your company, too, has abolished this “operating system apartheid”, which — given the platform character of browser-based technologies — I had considered history for years.

The credit card was then, after that single use, wound down by the Sparkasse. The next attempt may be a Visa card this year, since those are apparently to be issued as replacements for maestro cards starting this year.

It really is a bit rich that one is simply excluded for using a particular operating system.

// Oliver

  1. multi-factor authentication []
  2. Yes, security reasons: without rooting you cannot deactivate various questionable constructs that come pre-installed; ADB simply doesn’t suffice for everything. []
  3. There are also various websites on which you can verify this directly from within the browser. []
  4. What is meant there is always FIDO2, although the ambiguous use of “FIDO” could also lead one to believe FIDO U2F is being talked about … []
Posted in DE, IT Security, Meinung

Floating point precision … printf-VS2013-vs.-later-VS-version edition

As developers we probably all know that floating point precision can be an issue1. It can haunt us in various ways.

Generally when we talk about precision, though, we probably don’t have in mind printf as the first thing. This blog post is about a particular change from Visual Studio 2015, which caused some hassle — and how to work around it. It’s more about the formatting than actual precision, but the first thing that comes to mind here would be precision, which is why I chose it for the title.

It is the issue also presented in this forum thread and the relevant excerpt from the change announcement on the VS blog reads:

  1. Since I like the writing style, let me recommend this article and this article by Bruce Dawson; you can find other awesome stuff on his blog, including references to useful tools and explanations of difficult to track down defects he has dealt with … []
Posted in C/C++, EN, Programming

Enabling RSA (with SHA-1) again in OpenSSH server

The sshd version that ships with Ubuntu 22.04 seems to have abandoned RSA authentication. Well, that’s not true. It’s about the hash algorithm used by the “old” method going by the name ssh-rsa (RSA with SHA-1), which is deemed insecure by today’s standards. RSA is alive and kicking inside the algorithms going by the names rsa-sha2-256 and rsa-sha2-512.

Either way, that caused an immediate issue with my favorite file manager on Windows: SpeedCommander1.

Anyway, the solution was to enable a protocol on the server side (in my case a VM) that was understood by the client, i.e. SpeedCommander. Thus I added in /etc/ssh/sshd_config:
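For reference — an educated guess at what went in there, not the verbatim original — the directives that commonly re-enable the SHA-1 flavor for host-key and public-key authentication in OpenSSH 8.8 and later are:

```
# /etc/ssh/sshd_config -- re-enable ssh-rsa (RSA with SHA-1)
HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedAlgorithms +ssh-rsa
```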


… restarted sshd and was happily churning on.

// Oliver

PS: I have no qualms about the use case, because it’s a VM to which I locally connect. For other use cases I would probably resort to other solutions. But then: my main system at home runs Linux, not Windows 😉

  1. I know some people prefer TotalCommander — but I never was much into totalitarian software 😉 and it really couldn’t deal with Unicode and long paths for a very long time — or Far Manager. That’s okay. I won’t judge. Or maybe I will, but won’t tell you the judgment 😁 []
Posted in EN, Software

Bash training I gave some years ago

This is a Bash training I gave some years ago, which I had — however — prepared on my own time.

Some parts may be outdated. Others may need some touching up, but in general I think it can be valuable for others.

I license it under CC0.

Posted in Bash, EN

That trick I learned with the Visual Studio debugger

Alright, I’ll admit it: I am in team WinDbg. Sure, I’ll happily use WinDbgX — the “Preview” version of the “new” WinDbg, which has been in preview for ages now — but I always was a bit unhappy with the facilities that Visual Studio had to offer.

Lately I was helping debugging an issue in the Visual C/C++ runtime (“MSVCRT”) and we were wondering which exact Win32 status had been reported under the hood. Unfortunately by remapping the Win32 status codes to errno_t some information may get lost.

So I thought to myself: “Well, I know this one! The TEB1 holds the last Win32 status, which is what GetLastError() queries.” … so despite my disdain for the VS debugger, I thought I’d be able to guide someone else through using the pseudovariable $tib2 to look at TEB::LastErrorValue. Alas, when I tried, it already failed at the first step: identifier "_TEB" is undefined. Oh my.

The immediate rescue came from someone else, who suggested that we should be able to set a watch with the value GetLastError() to get to the Win32 status code. Adding another as GetLastError(),hr even makes it human-readable, just like the modifier x will cause values to be shown in hex:

Watch window inside Visual Studio showing failed attempts

But the next time around I needed to know the last NT status code. And while that also resides in the TEB as TEB::LastStatusValue, it’s even more cumbersome to get to. But either way, GetLastError() wasn’t going to cut it.

So back to the drawing board. But not for long.

Although I had also initially tried qualifying the name of the module by prepending it, separated with an exclamation point — (nt!_TEB*)$tib — just the way I knew from WinDbg, I only ever received: Module "nt" not found. But that seems to be a condition different from identifier … is undefined. And then I had the epiphany: probably the debug symbols containing _TEB and _PEB and friends were simply not loaded.

Watch window inside Visual Studio showing module not found error

And sure enough I noticed that I had picked — for performance reasons — “Load only specified modules” within VS. Telling it to load the symbols for ntdll.dll and kernel32.dll was my course of action:

Dialog: Symbols to load automatically with Visual Studio Options dialog in background

Furthermore it turned out that — contrary to what I was used to from WinDbg3 — nt wasn’t a valid module name. Fair enough, so I tried ntdll.

And sure enough it worked!

Watch window inside Visual Studio showing the first successful attempt to cast $tib to ntdll!_TEB

… and as you can see, it can even expand the variable and peek into it.

Consequently the next step was natural:

  • Last Win32 status: ((ntdll!_TEB*)$tib)->LastErrorValue,hr
  • Last NT status: ((ntdll!_TEB*)$tib)->LastStatusValue,hr


Watch window inside Visual Studio showing TEB::LastErrorValue and TEB::LastStatusValue

The nice thing is, since we can rely on the matching debug symbols, this should work reliably4.

If you wanted to be really “hardcore” you could use something like these to tap into the aforementioned structs without symbols:

  • *(int*)($tib+(sizeof(void*) == 8 ? 0x68 : 0x34)),hr
  • *(int*)($tib+(sizeof(void*) == 8 ? 0x1250 : 0x0bf4)),hr

Watch window inside Visual Studio showing TEB::LastErrorValue and TEB::LastStatusValue without loaded/available debug symbols

Hope this will prove useful to someone.

// Oliver

  1. Thread Environment Block
  2. Thread Information Block: _NT_TIB
  3. where nt can stand in as module name for either the current kernel or ntdll
  4. … unlike the layout of those structs from the Terminus Project, which may or may not be correct on any given system

IDA and Hex-Rays decompiler keyboard shortcut cheat sheet

Find it on GitHub: assarbad/some-latex/releases/tag/v1.0-ida-cheat-sheet

LaTeX source can be found in the repository itself.


Reminder to self: IDA load all sections

Just a reminder to myself. Edit cfg/pe.cfg inside the IDA installation folder to configure the PE loader to load all sections:

// Always load all sections of a PE file?
// If no, sections like .reloc and .rsrc are skipped
PE_LOAD_ALL_SECTIONS = YES

This will load the PE header as well as the resource section into the database.


ASR rule “Block Win32 API calls from Office macros”

Microsoft says it’s fixed. It may be, but I think there’s more to it than meets the eye.

Colleagues of mine noticed that, aside from shortcuts disappearing, Defender also started acting up on TortoiseProc.exe from TortoiseSVN. Notably, checkouts would fail and files would be “caught” (and reported) by Defender. Not only that, but once the rule had been set to audit as an immediate workaround, the problem stopped.

That raised an eyebrow, I have to admit.

But first, let’s pick apart the facts we know a bit. The rule is named “Block Win32 API calls from Office macros”1 — emphasis mine. Now wouldn’t this suggest that the rule is scoped to (Microsoft) Office only? To me it would. Continue reading

  1. GUID: 92e97fa1-2edf-4476-bdd6-9dd0b4dddc7b

dumpbin.exe, editbin.exe, lib.exe …

They’re all just slim wrappers around the actual link.exe. Rather than sharing a common DLL or similar, they actually invoke it:

  • dumpbin.exe simply invokes "link /dump" and failing that "link.exe link /dump"
  • editbin.exe simply invokes "link /edit" and failing that "link.exe link /edit"
  • lib.exe simply invokes "link /lib" and failing that "link.exe link /lib"

… respectively, with the command-line arguments you passed to the tool appended.
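Spelled out, the equivalences look like this (a sketch for a Developer Command Prompt; the file names are placeholders):

```shell
REM Per the wrapper behaviour above, each pair is equivalent
REM (example.obj / example.exe / example.lib are made-up names):

dumpbin /headers example.obj
link /dump /headers example.obj

editbin /rebase example.exe
link /edit /rebase example.exe

lib /list example.lib
link /lib /list example.lib
```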

I’m currently looking into the internals of cl.exe and link.exe and thought I’d share. Then again, I probably could have gained that insight from Geoff Chappell’s website rather than from IDA 😉 …

// Oliver
