Wi-Fi issues: Using telephone cable for Ethernet


One of my biggest mistakes during the renovation of the apartment was turning down the Ethernet sockets the electrician had offered to install. I thought, “I have Wi-Fi.” Besides, back in the ’90s I had installed 10BaseT network cables, then 100BaseT came along, and I figured that if I installed 1000BaseT now, it would be outdated in a few years anyway. I had already tried many solutions—Powerline (internet over power lines), Google Wi-Fi, Orbi, and at one point even an AVM Fritz mesh network, but that was awful. AVM seems to be particularly sensitive when it comes to Dynamic Frequency Selection (DFS). So-called prioritized users, like the military, weather stations, and air traffic control, use radar in the same frequency ranges as parts of the 5GHz spectrum, and as soon as such a user is detected, the router has to vacate that range. With AVM, this seems to mean an immediate fallback to the 2.4GHz band.

With the rise of working from home, Wi-Fi issues have become much more apparent. Sometimes, nothing would work at all. My gigabit connection would deliver only 100Mbit or less to the office on the other side of the apartment, sometimes more, and sometimes nothing at all. The following speed test from Vodafone shows one of the better moments, which were only possible thanks to the new Orbi system. Many others would probably envy this performance. But over a 90-minute video conference, this speed doesn’t hold up, and interruptions are just annoying.

Old buildings can have really thick walls, and on top of that there’s the radar issue and neighbors with their own Wi-Fi networks. But then I thought: why not use the telephone cables that are already in the walls? Almost every room has a telephone socket that we don’t use. The catch: the cable isn’t designed for this purpose, and plain telephone wire simply cannot be used as Ethernet cable. But is there really no way around this?

Actually, there is a way. But it takes a bit of fiddling. I’ll spoil the result right from the start: these are the values I was able to achieve with this solution:

That’s more than perfect. And on top of that, it’s stable. No more outages. How does this work? Of course, I couldn’t literally repurpose the telephone cables as Ethernet cables. But there is a technical solution that can be implemented with a bit of extra hardware. The G.hn Wave 2 approach, used in devices like those from GiGa Copper, lets you feed an Ethernet connection into a special modem, send the signal over the existing telephone wiring, and get Ethernet back out of a second modem on the other side. The small master modem is plugged directly into my cable router.

The purchase didn’t work out via eBay—I don’t have PayPal, and there were constant errors with the other payment methods. Apparently, you can only make a limited number of payments there without a PayPal account. However, when I called GiGa Copper directly, I got expert advice and was able to order at a slightly better price. It wasn’t as easy as GiGa Copper’s website suggested, though. The devices didn’t establish a connection through the telephone cables right after being plugged in. Although the telephone sockets in my apartment were correctly wired and connected, the GiGa Copper modems apparently expected a slightly different wiring layout. On top of the €230 for the modems, I had to pay someone to fix the wiring. It wasn’t exactly cheap, but the result was worth it. Especially for important presentations and lectures, it gives me peace of mind, knowing I don’t have to constantly worry about whether the Wi-Fi will hold up.

First experiences with Apple Silicon Macs and the M1


I have already experienced a processor change at Apple. My Apple career began in 1996 with a PowerBook 5300, which I absolutely loved—despite its 640×480 pixel grayscale display. On the one hand, a Mac laptop at that time was still something very special and rare (admittedly at an exorbitant price, but my then employer had provided it), and it had a keyboard that felt incredibly good and, above all, sounded wonderful. On the other hand, compared to the Windows PCs I had used before, it was extremely reliable. With 8 MB of RAM and a 500 MB hard drive, it was also quite well-equipped. This PowerBook was the first to feature a Motorola PowerPC processor, so a similar transition had already happened shortly before.

In 2006, Apple switched to Intel processors, a move that was extraordinary at the time, especially since in the 90s, Apple had aired commercials where a Motorola processor roasted an Intel processor. For the transition, Apple offered a program called Rosetta, which allowed PowerPC applications to run on Intel-based Macs. Typically, these programs ran slower. The commercial was actually referenced again when the first Intel Macs were introduced, around minute 1:05 of the presentation.

Now another transition. In 2019, I bought the 16″ MacBook Pro after many years with a MacBook Air. I hadn’t kept any other Apple computer longer than the Air, but over time, it had become too slow for what I was doing with it (R, a lot of work in the terminal with sed, awk, Lightroom, etc.). I hadn’t upgraded earlier because I absolutely didn’t want the awful butterfly keyboard. The return to scissor-switch keyboards began with the 16″ MacBook, but I still couldn’t get used to the huge device. Not to mention, it became incredibly hot and loud, and the battery life was far from Apple’s claims. For example, when I trained a machine learning model, the MacBook got so hot that I didn’t need to heat my office anymore. And during any Zoom or Webex call, the battery drained faster than an ice cube melting in the summer heat.

I spend quite a bit of time waiting for the results of a calculation, even if it’s only 20 or 30 seconds sometimes, but it adds up over the day, and sometimes it’s several minutes or even hours. I usually know in advance how long it will take, but I don’t start another task for just half a minute because it disrupts my train of thought. Data analysis is also a meditative act for me. So, the speed of a computer is extremely important. Not just for data analysis, but for all other tasks on the computer as well. It just has to feel smooth.

The speed of a calculation in R depends on many factors:

  • Memory (yes, R loads everything into memory)
  • Processor speed
  • Parallelization

For memory, the first Apple Silicon models aren’t particularly well-equipped—16 GB is the maximum. The especially short path from the processor to the unified memory doesn’t change that. The operating system uses part of the memory, the running programs use some more, so there isn’t much left. Especially when working with large files, as I often do—sometimes 50GB or more—swapping is almost predetermined. Parallelization is not possible yet either, as the necessary packages are not available—Homebrew, for example, still isn’t.

Additionally, R is currently not available natively for the new Macs. There is (still) no Fortran compiler, and that is a problem not only for R but also for many machine learning extensions for Python. Who would have thought that this old programming language would still have such a big influence today? Of course, R also runs via Rosetta, but then I could have skipped buying a new Mac and letting myself be used as a beta tester for Apple 🙂 But, small spoiler: even under Rosetta, the Intel version of R runs faster on the M1, and it seems that’s not just the case for me.

I initially purchased a Mac mini with 8GB of RAM and a 512GB SSD to test how good the performance actually is and whether I could make the transition. I was able to pick up the Mac mini the same day from the Apple Store, and from the start, I was amazed at how smooth everything felt on this computer. R worked flawlessly, though RStudio showed error messages frequently. No big deal. But it soon became clear that the memory limitation was an issue. When trying to process a 200GB file (using sort, awk, sed in the shell), at some point, the hard drive filled up with swapping, and the process failed. Okay, maybe the mini is just a bit too weak for that task. What surprised me, though, was that not once did the fan kick in—this would not have been the case with the 16″ MacBook Pro. So, all in all, everything seemed great…
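For illustration, here is a minimal sketch of how such a shell pipeline can be kept memory-friendly on a machine with little RAM (the file name and the tiny sample data are made up): GNU sort’s `-S` flag caps its memory use, and `-T` points its temporary spill files at a volume with enough free space.

```shell
# Stand-in for the real multi-gigabyte file
printf 'b\t2\na\t1\n' > big.tsv
mkdir -p /tmp/sorttmp
# -S caps sort's RAM use, -T redirects its temp files,
# LC_ALL=C makes comparisons byte-wise and much faster
LC_ALL=C sort -S 512M -T /tmp/sorttmp big.tsv |
  awk -F '\t' '{ sum += $2 } END { print sum }'   # → 3
```

With limits like these, the pipeline spills to disk instead of swapping, which is exactly what the 8GB Mac mini could not do once its SSD filled up.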

…except for the Bluetooth. My Mac mini also had the well-known Bluetooth problems. Specifically, the mouse loses its connection multiple times a day, which is extremely inconvenient when you’re showing a demo during a video conference. Not good, very frustrating. I tried all sorts of tips, including using a wired connection to the network instead of Wi-Fi. No improvement. It’s unclear whether this is a hardware or software issue. A chat with Apple Support dropped multiple times, and eventually, I got tired of it because, you know, I have a job too. An update to the Big Sur beta helped a little, and as of yesterday, the computer is running on 11.1, so I’m hoping it will be better now, and that it’s not a hardware issue.

Another not-so-pleasant experience was the sound. I have never experienced an Apple computer with such poor sound quality—my old PowerBook 5300 probably sounded better. They could have definitely done much more with the sound.

Despite the Bluetooth issues, after 2 days, I decided to also buy a portable Apple Silicon Mac. In full configuration (16GB RAM, 2TB SSD), it costs about the same as I could sell my 16″ MacBook Pro for on the used market, and at the same time, I get double the storage space. There used to be a rule that you should calculate how much storage you might need and then multiply that size by 4. Unfortunately, there are no 8TB SSDs for these computers yet.

The computer arrived after almost 3 weeks, one week earlier than expected. Here, I noticed a small speed boost, likely due to the doubled RAM. The 200GB file also went through smoothly now, thanks to enough space on the SSD. And, just like with the Mac mini, the computer hardly seemed to break a sweat. Only once did the computer get a little warm, but not hot, and certainly not as hot as the 16″ MacBook Pro. This is also reflected in the battery life. I have yet to drain the battery in a single day. No kidding. I plug the computer in at night, and I usually still have a few hours of battery life left. It’s a completely new feeling.

The Bluetooth issue also exists with the MacBook Air. This is unpleasant, and I wonder how it could have gone unnoticed in the tests Apple conducts. That a transition doesn’t go completely smoothly is understandable, and you’re always somewhat of a guinea pig when buying the first model after a major shift. For me, it’s a trade-off: How much time do I gain by having a fast computer versus how much time do I lose when something occasionally doesn’t work. The mouse connection is of course a hygiene factor; it should just work. But with the MacBook Air, I’m not as reliant on it. So far, I’m happy with my decision, though I would have preferred 32GB or even 64GB of RAM. But those options aren’t available yet.

The sound of the MacBook Air is much better than my old Air’s, but it doesn’t compare to the 16″ MacBook Pro. No surprise, the speakers are much smaller. Still, it’s better than the mini’s sound.

The instant wake feature actually works, and sometimes I wonder if the computer was even “asleep.” The keyboard sounds almost as nice as that of the PowerBook 5300, and if anyone wonders why a keyboard should sound good, well, aesthetics don’t stop at just the visual 🙂

Sonos: The Great Love That Didn’t Survive Everyday Life


I was probably one of the first buyers of Sonos speakers in Hamburg. The boxes were new and expensive, but exactly what I was looking for. And I was super happy and deeply in love with Sonos. I love music. And the ability to listen to my music in any room at any time was phenomenal. The app responded instantly, there were never any issues, and soon I had 3 speakers (there was only one model back then, the large one) and a bridge to which I connected my hi-fi system. The latter never really worked well, so I stuck with the pure Sonos system.

As with any love, the picture starts to get scratches once everyday life sets in. My Sonos family grew over time—a Sub was quickly added, along with two Ones, three Threes, and a Symfonisk—but the system became more and more unpredictable. Eventually, I lost interest in listening to music with Sonos at all. Every time I wanted to start a song, I was already afraid it wouldn’t work flawlessly. The music would cut off, and the wildest error messages appeared in my once harmonious Sonos world:

  • Connection to Sonos product not possible
  • Cannot play the selected item
  • Cannot connect to the device. Please try again later.
  • Error adding titles to the list (1002)

Or sometimes, nothing would happen at all. The spinning wheel of death would appear. Or music would play and immediately stop, or it would skip to the next track, and so on… The software, whether on mobile or desktop, no longer responded in real-time.

Of course, my home network isn’t exactly simple. I had Google Wifi for a while, then switched to Orbi, and recently an AVM Fritzbox has been doing its job. I tried using cables, creating a separate network for the Sonos boxes, removing the Gen 1 devices to switch to Gen 2, but… nothing worked. Studying the support page (accessible at http://IP-ADDRESS:1400/support/review) didn’t help at all; the quality of the connections between devices was mostly suboptimal.

A few minutes later:

A few minutes later again:

My last hope was the Sonos Boost. For nearly 100€, you get a Wi-Fi extender, but it’s more like the Boost creates its own Wi-Fi network for the Sonos devices, so they no longer cause confusion in the main Wi-Fi network. You can see the Boost in the screenshots above, and what you can also see is that it doesn’t really improve the situation. It feels like there are fewer problems with the main Wi-Fi, but Sonos is still sluggish and mostly unresponsive. It got a little better after I removed the Symfonisk from IKEA. Apparently, Sonos doesn’t like these devices very much, or maybe the antennas in the devices just aren’t that good.

When you also consider that Sonos tried to persuade owners of old Sonos devices to dispose of them and buy new ones in order to enjoy the next generation of software, I unfortunately can no longer recommend Sonos. Through my Phoniebox experiment, I’ve learned that you can build Wi-Fi speakers more cheaply on your own.

OpenMediaVault or NextCloud?


In recent years, I’ve tried several NAS systems: Synology, QNAP, NextCloud, and OpenMediaVault. I can only advise against Synology and QNAP; you pay a lot of money for a more or less nice enclosure, but the underlying software is outdated, and the performance compared to open-source alternatives is subpar.

Two open-source alternatives are NextCloud and OpenMediaVault. They have a major disadvantage: you can’t buy them with hardware. The software is available for free online, but you have to source the hardware yourself. This is, of course, a hurdle. I’ve had very good experiences with NextCloudPi. It’s like a local Dropbox at home for very little money. However, strictly speaking, NextCloud is not a NAS but just cloud software. Setting up a shared folder that you can mount as a drive involves a bit more effort. But it does offer software that you can install on your phone and computer to sync part of your hard drive.

OpenMediaVault offers exactly that—a hard drive on your own network—but lacks what NextCloud offers: there’s no software for mobile or desktop, drives must be mounted manually, and nothing gets synchronized. OpenMediaVault requires Debian, whereas NextCloud is more flexible. On the other hand, OpenMediaVault isn’t particularly resource-hungry; I haven’t yet been able to stress my installation on a Raspberry Pi 4 with 2GB. Outside your network, you can only access your files via VPN, which is not included and must be set up either on the router or on another system. And OpenMediaVault is compatible with Apple Time Machine!

To summarize, I would describe it like this:

  • Avoid buying systems as much as possible and save a lot of money.
  • NextCloud is great if you want an alternative to a cloud service.
  • OpenMediaVault is good if you need shared drives.

For both systems, NextCloud and OpenMediaVault, you still need some other system to back up your data. Yes, you may have your data stored locally on a second medium, but if your house burns down, you have a problem.

OpenMediaVault with the Raspberry Pi 4


After removing the Raspberry Pi 4 from my kids’ Phoniebox and replacing it with a less power-hungry Raspberry Pi 3, I was looking for my next project. I’ve since moved away from NextCloudPi, as managing it with the iPad/iPhone/Mac was just too cumbersome. However, I’ve been wanting to replace the QNAP for a long time, and I’ve always wanted to try OpenMediaVault as a file server, including for TimeMachine backups.

The QNAP Mispurchase

What bothers me about the QNAP? I had bought the TS-431X2 for over €500, and it has a Quad-Core AnnapurnaLabs Alpine AL-314 1.7 GHz processor. Using Docker on it is not enjoyable, which was actually one of the reasons I bought the QNAP NAS. Unfortunately, the QNAP wasn’t much faster than the Synology I had before. But the worst part was the outdated libraries, especially with regard to the web server. You can install NextCloud with a lot of workarounds, but it’s far from user-friendly, and the support was not very helpful. And then there were constant error messages or warnings popping up.

In addition, setting up shared folders and so on is also quite complicated. I just got frustrated with the thing.

Is OpenMediaVault Better?

First of all, it’s different. OpenMediaVault is NAS software, but unlike QNAP or NextCloud, it doesn’t offer desktop syncing. You can’t buy ready-made hardware, plug it in, and have it work. It’s certainly not suitable for average users, but they probably wouldn’t buy a NAS from QNAP anyway 🙂

If you know a little about Linux, the installation is relatively simple:

  • Flash Raspbian Buster Lite (or whatever comes next) onto a microSD card. Raspberry Pi provides an installer for Mac and PC. Just make sure to choose the Lite version, not the standard Raspbian.
  • Copy an empty file named ssh to the boot volume so you can log in via the terminal (assuming you have an Ethernet connection).
  • Insert the card and start the Raspberry Pi.
  • Then find the IP address of the Raspberry Pi and log in with:

ssh pi@IP-ADDRESS

  • and log in with the password raspberry,
  • then run all the necessary updates:

sudo apt-get update

sudo apt-get upgrade

then it’s just:

wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash

and then a reboot. After that,

  • enter the IP in the web browser,
  • use admin and openmediavault as the login,
  • and you’re in.

My 1TB SSD was immediately recognized, but it had to be formatted and then mounted.

First Problems

Unfortunately, that was the end of the easy steps. For example, Sonos wouldn’t connect to OpenMediaVault. The trick is that you need to change the SMB configuration. This can be done in the interface. I was also a bit shocked at first by the following error message:

It couldn’t get much more dramatic. At first, I thought I had encountered a kernel panic. However, the error message only indicated that the session had expired.
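Back to the SMB change that finally let Sonos connect: older Sonos generations only speak the legacy SMB1 (NT1) dialect, which modern Samba disables by default. In OpenMediaVault, lines like the following go into the “Extra options” field of the SMB/CIFS settings. Whether your Sonos devices actually need SMB1 depends on their generation, so treat this as a hedged sketch rather than a universal fix:

```
# Allow the legacy protocol that older Sonos devices expect
server min protocol = NT1
ntlm auth = yes
```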

Then came network connection drops, adding to the initial problems. And these were serious. My SanDisk 1TB Ultra SSD is brand new, but apparently, the Raspberry Pi 4 doesn’t like the adapter:

UAS_EH_ABORT_HANDLER is an error message you don’t often see. A bit of research shed some light on the issue. By the way, the change in /boot/cmdline.txt should actually be at the beginning of the line and not create a new line. If you do it the wrong way, the Raspberry Pi won’t boot anymore—I’ve already tried that for you 🙂
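As a sketch of that change: the usual remedy for this error is to disable UAS for the adapter via a `usb-storage.quirks` entry. The vendor:product ID `1234:5678` below is a placeholder (`lsusb` shows the real one for your adapter), and the sketch works on a local copy rather than the real `/boot/cmdline.txt`:

```shell
# Local stand-in for /boot/cmdline.txt -- the real file's content differs
printf 'console=serial0,115200 root=PARTUUID=0000 rootwait\n' > cmdline.txt
# Prepend the quirk to the SAME line; ":u" disables UAS for that device.
# Putting it on a new line instead would leave the Pi unbootable.
sed -i 's/^/usb-storage.quirks=1234:5678:u /' cmdline.txt
cat cmdline.txt    # still a single line, quirk first
```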

The connection is not as slow as feared; despite the quirk (which disables the faster UAS mode), it is not bad at all:

Just under 600 Mbps is something I’ve rarely seen. Of course, this was measured against the Ethernet adapter, so actual writes to disk might be slower. But at least I haven’t had any network dropouts so far. What stands out is that far less RAM is used than on the QNAP. And on the Raspberry Pi, I can even run R 🙂

Conclusion

Unfortunately, there is still no suitable case for my intended project. I’d like to fit the Raspberry Pi plus 2 SSDs into a case. That would obviously be a good excuse to finally buy a 3D printer, but honestly, I don’t have time for another hobby.

Overall, this setup with the Raspberry Pi and OpenMediaVault makes a good impression. For a fraction of the cost (under €100), you get more performance than with the expensive QNAPs or Synologys of the world.

Shopping List

Phoniebox: From MVP to the Real Deal (Toniebox Alternative)


Preliminary: The first Phoniebox, my MVP (Minimum Viable Product) in a cardboard box, was an instant hit. The RFID cards, which were decorated and painted, are treated like the greatest treasure, sometimes even hidden or brought to the table. So, it’s time to turn the MVP into a proper box. If you haven’t read the first part about my MVP, the Phoniebox is an open-source alternative to the Toniebox that you can build yourself. The advantage is that you don’t have to buy expensive Tonies but can use cheap RFID cards or stickers to play your own selected music files.

I’m not particularly handy, and the question of how to make holes in the wooden box had me stumped. I didn’t want to just drill small holes again, especially since I wanted better speakers this time. And that’s how things escalated. This box ended up being considerably more expensive. What did I buy?

Total: approximately €200! The saw attachment can be used again, and handier people may already have such tools at home. But you should be aware that the Phoniebox can initially be much more expensive than the Toniebox. However, if you were to buy multiple Tonies for the Toniebox, and let’s just take the Creative Tonies at €11.99 each as an example, you’d be at the same price after 10 Tonies. From the 11th card onwards, the Phoniebox becomes cheaper. In terms of sound quality, my box is definitely in a different league than the small Toniebox.

Activating the MiniAmp wasn’t entirely straightforward. Olaf Splitt describes the necessary steps very well here. However, I had already installed the Phoniebox software, which unfortunately occupies some of the GPIO pins; these need to be disabled. What was confusing was that the Raspberry Pi detects the sound card and you can adjust the volume, but no sound comes out, which made troubleshooting tricky. The sound, combined with the speakers, is really great, though. I initially assembled the parts without the box just to check that everything works.

The EasyAcc power bank is one of the few that can provide power while also being charged, which ensures uninterrupted music enjoyment. So far, however, the battery hasn’t actually gained charge: the Raspberry Pi draws power faster than the power bank takes it in. Olaf Splitt doesn’t address this point clearly in his otherwise great guides—he even mentions weeks (!!) before needing to recharge the power bank. The big difference is probably that I’m using a Raspberry Pi 4, and it drains the EasyAcc faster than it can recharge. The power bank can theoretically be charged at 5V 4A, but only if both USB ports are used for charging; otherwise, it stays at 5V 2.4A. The Raspberry Pi 4 is typically powered with a 5V 3A power supply, but it should also run at 5V 2.5A, depending on the connected peripherals—in my case, the MiniAmp and the USB card reader. I may end up replacing the Raspberry Pi 4 with a Raspberry Pi 3.

Here’s the final result, though the video doesn’t quite reflect the actual, really good sound quality.

 

This is what the inside looks like:

Other DIYers had removed the card reader from its plastic casing and glued it directly inside. However, it works well like this too, as the box wall isn’t so thick that the card signal wouldn’t work. An additional Wi-Fi antenna wasn’t necessary either, but our apartment’s Wi-Fi signal is also very strong.

I haven’t installed the USB port yet. In the next step, I also plan to add a power switch that will properly shut down the box before cutting off the power.

Phoniebox, the Affordable Alternative to the Toniebox (My First Prototype and MVP)


When I was a child, if there was one thing I loved, it was listening to records or cassettes for hours. Our kids also love music, and luckily, famous arias from The Magic Flute are a hit, but so are Kraftwerk’s The Robots or Herman van Veen’s quirky adventures. But how can we give toddlers access to “their” music in a world dominated by Spotify and Sonos?

A Toniebox wasn’t an option for us, even though its user interface is very child-friendly. Fortunately, there are open-source alternatives, and what’s even better, many of the components needed for this project I already had at home. One such project is the Arduino-based Tonuino, which has the big advantage of being very power-efficient and having an extremely quick boot time. The other project, which I replicated, is the Phoniebox, based on a Raspberry Pi 3. It has a longer boot time of more than a minute if the box is unplugged, and of course, it consumes more power. The big advantage for me here is that music can be managed via a web interface. Plus, I can easily turn the volume down when the kids get too excited 🙂

How does it work? In a specific directory on the SD card, folders are created in which music files or stream URLs are stored. The RFID cards are linked to these folders, so each card “triggers” playback of the music in the respective folder. No data is saved on the card itself, and a card can be re-linked at any time. You can paint or sticker the cards so the kids know which music, audiobook, or stream belongs to which card.
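As a sketch of that scheme (the folder name and card ID below are made up, and the exact directory layout depends on your Phoniebox version):

```shell
# Music lives in one folder per card...
mkdir -p audiofolders/zauberfloete shortcuts
printf 'fake mp3 bytes' > audiofolders/zauberfloete/01-zu-hilfe.mp3
# ...and a tiny text file links an RFID card's ID to that folder.
# The card itself stores nothing, so the link can be changed at any time.
echo 'zauberfloete' > shortcuts/0594672283
cat shortcuts/0594672283    # → zauberfloete
```

Re-linking a card then just means writing a different folder name into its shortcut file.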

The shopping list on the Phoniebox site was a bit confusing and sometimes unsuitable. Here’s what I purchased:

Total: €94.15. For comparison, a Toniebox costs €79.90. It comes with one Tonie, with additional pre-recorded Tonies priced at €14.99 and creative Tonies for €11.99. If I understand the concept correctly, you can load 90 minutes of content onto a Creative Tonie via the cloud, and while you can use them offline, the content is still managed online. With my 50 RFID cards, I essentially already have 50 Tonies, just without content. The content can be either something I already have (I had many of my old cassettes digitized because I couldn’t get them as CDs) or can be sourced cheaply. So, an RFID card can also be linked to a Spotify song, album, or even a radio station. Overall, it’s cheaper if you have more content, plus you’re independent from any platform. Building a box with Arduino would likely be much cheaper, and the battery would probably last much longer too.

The setup of the PhonieBox took me about 90 minutes:

  • Flash the Raspbian Buster image onto the SD card.
  • Pre-configure SSH and WiFi so I wouldn’t need a monitor or keyboard.
  • Assemble the components and connect the power.
  • Log in via SSH and start the installation with a one-liner.
  • The box is then ready to use!
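The second step works by dropping two files into the card’s boot partition before the first boot. The sketch below uses a local stand-in directory for the boot volume, and the SSID and password are placeholders:

```shell
BOOT=./boot-demo            # stand-in for the SD card's boot partition
mkdir -p "$BOOT"
touch "$BOOT/ssh"           # an empty file named "ssh" enables the SSH server
# Raspbian picks this file up on first boot and applies the Wi-Fi settings
cat > "$BOOT/wpa_supplicant.conf" <<'EOF'
country=DE
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="MyHomeWifi"
    psk="changeme"
}
EOF
ls "$BOOT"                  # shows both files
```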

Here’s what it looks like when initially assembled without a case:

Most of my time was spent loading the cards with content. You can either upload the audio files directly via SMB to the drive or through the web interface. However, not all MP3s were immediately recognized.

I hadn’t initially planned for a case, as I wanted to first check if I could even assemble the Phoniebox and if the kids could manage it. Therefore, the first version didn’t turn out very pretty, but hey, it’s a Minimum Viable Product 🙂

Much nicer boxes can be seen here:

What are the experiences after a few hours/days?

  • The box is loved dearly and sometimes even fiercely contested. A second box is definitely needed.
  • At 2 1/2 years old, the kids don’t fully understand that multiple songs can be played with one card, and that the card needs to be placed back on the reader to move to the next track (if configured this way). So, I’ve set it up to restart the song from the beginning when the card is used.
  • It’s also a good idea to start with only a small selection of cards; otherwise, it can be overwhelming for the little ones. Our cards include:
    • “Zu Hilfe, zu Hilfe” from Die Zauberflöte
    • “Der Vogelfänger bin ich ja” from Die Zauberflöte
    • “Weg da” by Herman van Veen
    • “Das Lied der Schlümpfe” by Vader Abraham
    • “Die Roboter” by Kraftwerk
    • “The Young Person’s Guide to the Orchestra”
    • “Peter and the Wolf,” narrated by Loriot
  • One of the kids has hidden the card for the first track in the list—it’s his greatest treasure, which he barely lets go of.
  • I can’t recommend the Anker PowerBank, as it doesn’t charge while supplying power to the Raspberry Pi. For version 0.2, I’ll therefore switch to this model, which makes the device another €33 more expensive.

My next version:

  • Will be built in a proper wooden box.
  • I will also upgrade to better speakers and use the HifiBerry MiniAmp.
  • I want to add a socket to the case for charging the battery.
  • I plan to avoid buttons for now; the more parts there are, the more that can break.
  • Finally, the two boxes should also be able to synchronize, and there is a guide by Olaf Splitt for that.

And here’s the post about my second Phoniebox!

Materials for Web Analytics Wednesday, April 8, 2020


It’s great that you were part of the first virtual Web Analytics Wednesday. Here are the promised links:

All links marked with + are affiliate links

Are my texts being read? Implementing analytics in detail


On the occasion of the anniversary of the magazine Website Boosting (60th issue!), here is a deep dive on how to create a custom report on texts that are read to the end. It is a supplement to my four-part series “Web Analytics: How Data Leads to Action”; the 60th issue contains the 3rd part. I had already written about the topic here in comparison to scroll depth. This is an example of how custom and calculated metrics can be used.

The screenshot shows, per page:

  • How many words a text has
  • How many times a page has been viewed
  • The proportion of views that led to an exit
  • The number of times the YARPP element was visible (YARPP stands for Yet Another Related Posts Plugin, which displays similar articles at the end of an article; if this element is visible on the user’s screen, it is assumed that the article above it has been read to the end)
  • The share of all page views in which the YARPP element became visible
  • The number of clicks on a YARPP link
  • The percentage of clicks on a YARPP link in relation to the visibility of the element

What problem does this report solve?

  • If a text is read to the end less often than other texts, then this text does not seem to be written in such an interesting way.
  • The length of the text could be a predictor of whether a text is read to the end; but if a shorter text is not read to the end, this could be an even stronger signal that the text is in need of optimization.
  • If the links to similar articles are not clicked on even though they are visible, they do not seem to be relevant.

Create the custom dimension and metrics

  • In Analytics, go to Administration (bottom left) and then click Custom Definitions in the Property column.
  • Click on Custom Metrics and then on the red New Custom Metric button
  • Choose an understandable name (e.g. “YARPP Seen”)
  • The Scope is Hit
  • The formatting type is integer
  • The remaining values can be left blank
  • Click Save.
  • Repeat the process once again, this time for the “YARPP Clicks”. The settings are the same.

The first entry should now have an index value of 1 and the second an index value of 2—unless custom metrics had already been defined before.

If the number of words in a text is also to be recorded, a custom dimension is required. The process is similar: again, choose a suitable name and the scope Hit. The index value of this custom dimension also needs to be remembered or noted, as it will be used later in Google Tag Manager.

Implementation in Google Tag Manager

Once the custom definitions and metrics have been created, values can be written to these variables. This is done with Google Tag Manager. First, the element whose visibility should fire the trigger must be identified on the page. The necessary steps for this are already described in this article. Then the following trigger is configured:

The trigger fires a tag, which now also has to be configured:

It is important in this step that the settings are overwritten, as this is the only way to pass a value as a custom metric (Custom Metrics in the screenshot). Here you have to enter the index value that Analytics assigned in the step above. The metric’s value is 1, because the counter should increase by 1 for each sighting.

The Scroll Depth Threshold variable is not necessary here; if it is used, it may need to be enabled first. This step must then be repeated for the clicks on a YARPP link and, if applicable, for the custom dimension holding the number of words per text. However, these can already be passed in the Google Analytics settings, which are defined as a variable. In my case, the configuration looks like this:

As you can see, there are a few peculiarities in my configuration, but the WordCount is passed into a custom dimension with an index value of 7.

Creating the calculated metric

In order to display a ratio or conversion rate, a calculated metric is created. These are the columns “YARPP Seen CVR” and “YARPP Click CVR” in the example report in the first screenshot. Note: It may take some time for the custom metrics to be visible here! This means that this step may only be feasible after a few hours or even after a day.

In the Administration screen, in the far-right column, you will find the entry Calculated Metrics. Click on the red New Calculated Metric button and apply the following settings in the next screen. All you have to do is type the first few letters of a variable name, and Analytics will complete it. This is the setting for the Click CVR:

For the Seen CVR, the formula {{YARPP seen}} / {{pageviews}} is used. For example, if the YARPP element was seen 80 times across 200 page views, the Seen CVR is 80 / 200 = 40%.

Create the custom report

Last but not least, a report is created, as seen in the first screenshot above. Under Customization (top left) and Custom Reports, a new report can be created. Here, all the relevant custom metrics created above are selected, along with the appropriate dimension. Unfortunately, no secondary dimension can be selected here; it has to be added manually when the custom report is opened.

That’s it! Further valuable knowledge about web analysis can be found in my book “Introduction to Web Analysis”!