How do minimalism and Apple products go together, when Apple is so expensive?


I have been using Apple products almost exclusively since the mid-90s. Now and then, I engage in debates about the pros and cons of Apple products compared to their competitors, especially regarding the price difference. And of course the question arises whether minimalism and using Apple products even go together: there is a tension between Apple's design culture and the contradictions of consumption.


Working More Productively with the Apple Stage Manager


Apple’s new macOS version, Ventura, and the new iPadOS 16, both due in the second half of 2022, bring many new features. One of the most hyped is the new multitasking feature, Stage Manager. Let’s take a closer look at it here.

What exactly does Stage Manager do?

Here’s what the press release says:

Stage Manager provides a completely new multitasking experience, where apps and windows are automatically organized, allowing users to quickly and easily switch between tasks. For the first time, users can create overlapping windows of different sizes in a single view on the iPad, drag and drop windows from the side, or open apps from the Dock to create groups of apps—enabling faster, more flexible multitasking. The window of the app the user is working in is displayed in the center, while other open apps and windows are arranged on the left side in order of their recency.

Apart from the marketing fluff, there are three key pieces of information here:

  • On the left, apps and windows are arranged in order of their recency.
  • You can group apps and windows.
  • On the iPad, you can now use overlapping windows of different sizes (we’ll cover the limitations below).

Let’s first take a look at the macOS version. In the following screenshot, we can see 5 apps/windows on the left side. If you look closely, you’ll notice even more, as two apps/windows are already grouped (at the very bottom).

When you click on these windows, you’ll see them stacked on top of each other, here with a different example:

On my rather small 14″ screen, this doesn’t make much sense. While I can still switch between windows using Command-Tab, I can’t see the windows related to my task in the way I need to. With such a small screen, it’s probably better to place each window needed for a task separately in Stage Manager.

The organization of windows still doesn’t work all that well. For example, RStudio opens a new window when I commit code. This is not assigned to the main RStudio window but instead opens as a completely new window. This is also visible in the screenshot above with a Mail window. It doesn’t seem fully thought through to me.

However, what’s kind of nice: If you’re watching a YouTube video in a browser window, it will continue playing in the left sidebar. Not that you’d be able to see much, but in YouTube’s Theater mode, you can still follow the video a bit. How this benefits concentration is another matter.

What are the advantages?

At first, I was a bit disappointed with Stage Manager. What’s supposed to be better about switching between different apps for a task? For me, the advantage lies in something completely different, which Apple probably didn’t intend.

When you switch from one app to another today, you lose sight of the previous app. This can lead to forgetting what you actually wanted to do (“Quickly check what exactly was written in the email… oh, there’s a new email, I need to read that first”). However, because the previous apps are still visible, you’re quickly reminded of what you were actually supposed to do. This has worked quite well for me in the few days I’ve been using Stage Manager.

How does Stage Manager work on the iPad?

Stage Manager is also available on the iPad, but only for iPads with an M1 processor. My less-than-a-year-old iPad Air cannot use Stage Manager. Nevertheless, I was able to test Stage Manager on another iPad.

First of all, I wondered how much sense Stage Manager makes on a small iPad screen. Of course, iPads can also be connected to an external display, and it likely works well in that case. Otherwise, I see the same advantages and disadvantages as with the macOS version. Here’s the screen with grouped apps on the left:

The stacked windows on the iPad make even less sense to me here, though I only have an 11″ model.

Do you really need Stage Manager?

I’m a bit concerned that Stage Manager will meet the same fate as Mission Control: hardly anyone knows the feature exists, and most users probably only stumble upon it by accident. Additionally, Stage Manager has to be activated first. My guess is that most users install new OS versions simply because they are installed automatically, not because they really want them (unlike in the past, when people eagerly awaited a new macOS version, such as macOS 8 in 1997, for which you also had to pay nearly 200 euros). On the other hand, sometimes you only realize how good a feature is once you have it.

The other new features in the latest OS versions are cosmetic. The system preferences on macOS now look exactly like those on iOS and iPadOS. I’m really curious about Freeform, but unfortunately, it’s not included in the beta version yet.

Apple MacBook Pro M1 Max – Is it worth it for Machine Learning?


Another new MacBook? Didn’t I just buy the Air? Yes, and it’s still under warranty, which makes it all the more sensible to sell it now. I’m a big fan of the Air form factor and have never quite warmed up to the Pro models. However, the 16 GB RAM limit of the MacBook Air was hard to accept at the time, and there were no alternatives. So, on the evening the new MacBook Pros with the M1 Pro and M1 Max were announced, I immediately ordered one: a 14″ MacBook Pro with an M1 Max, 10 CPU cores, 24 GPU cores, a 16-core Neural Engine, 64 GB of RAM (!!!), and a 2 TB drive. My MacBook Air has 16 GB of RAM and the first M1 chip with 8 cores.

Why 64 GB of RAM?

I regularly work with large datasets, ranging from 10 to 50 GB. But even a 2 GB file can cause issues, depending on what kind of data transformations and computations you perform. Over time, using a computer with little RAM becomes frustrating. While a local installation of Apache Spark helps me utilize multiple cores simultaneously, the lack of RAM is always a limiting factor. For the less technically inclined among my readers: Data is loaded from the hard drive into the RAM, and the speed of the hard drive determines how fast this happens because even an SSD is slower than RAM.

However, if there isn’t enough RAM, for example, if I try to load a 20 GB file into 16 GB of RAM, the operating system starts swapping objects from the RAM to the hard drive. This means data is moved back and forth between the RAM and the hard drive, but the hard drive now serves as slower “RAM.” Writing and reading data from the hard drive simultaneously doesn’t speed up the process either. Plus, there’s the overhead, because the program that needs the RAM doesn’t move objects itself—the operating system does. And the operating system also needs RAM. So, if the operating system is constantly moving objects around, it also consumes CPU time. In short, too little RAM means everything slows down.

At one point, I considered building a cluster myself. There are some good guides online about how to do this with inexpensive Raspberry Pis. It can look cool, too. But I have little time. I might still do this at some point, if only to try it out. Just for the math: 8 Raspberry Pis with 8 GB of RAM plus accessories would probably cost me close to €1,000. Plus, I’d have to learn a lot of new things. So, putting it off isn’t the same as giving up.

How did I test it?

To clarify, I primarily program in R, a statistical programming language. Here, I have two scenarios:

  • An R script running on a single core (not parallelized).
  • An R script that’s parallelized and can thus run on a cluster.

For the cluster, I use Apache Spark, which works excellently locally. For those less familiar with the tech: with Spark, I can create a cluster in which computational tasks are divided up and sent to individual nodes for processing. This allows for parallel processing. I can either build a cluster from multiple computers (which requires sending the data over the network), or I can install the cluster locally and use the cores of my CPU as the nodes. A local installation has the huge advantage of no network latency.
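A minimal sketch of such a local "cluster" in R, using the sparklyr package (the master string, the core count, and the toy table are illustrative; a real workload would read large files directly rather than copying a data frame over):

```r
# Sketch: a local Spark cluster from R with sparklyr.
# "local[9]" asks Spark to use 9 local CPU cores as workers.
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local[9]")

# Copy a tiny example table into Spark just to show the workflow.
dist_tbl <- copy_to(sc, data.frame(dist = c(100, 250, 400)), "distances")

dist_tbl %>%
  summarise(mean_dist = mean(dist, na.rm = TRUE)) %>%
  collect()

spark_disconnect(sc)
```

The `collect()` at the end pulls the (small) result back from the cluster into a regular R data frame; the heavy lifting before that happens on the Spark side.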

For those who want to learn more about R and Spark, here is the link to my book on R and Data Science!

For the first test, a script without parallelization, I use a famous dataset from the history of search engines, the AOL data. It contains 36,389,575 rows, just under 2 GB. Many generations of my students have worked with this dataset. In this script, the search queries are broken down, the number of terms per query is calculated, and correlations are computed. Of course, this could all be parallelized, but here, we’re just using one core.
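A single-core version of this kind of analysis might look roughly like the following. This is only a sketch: the file name and the column name `Query` are assumptions about the AOL log format, and the correlation shown is just one example of the computations mentioned above.

```r
library(dplyr)
library(stringr)

# Hypothetical file and column names; the AOL logs contain
# one search query per row.
queries <- read.csv("aol_queries.csv", stringsAsFactors = FALSE)

queries <- queries %>%
  mutate(
    n_terms = str_count(Query, "\\S+"),  # number of terms per query
    n_chars = nchar(Query)               # query length in characters
  )

# Example correlation: do longer queries contain more terms?
cor(queries$n_terms, queries$n_chars)
```

Everything here runs in a single R process on one core, which is exactly what the first benchmark measures.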

For the second test, I use a nearly 20 GB dataset from Common Crawl (150 million rows and 4 columns) and compare it with data from Wikipedia, just under 2 GB. Here, I use the previously mentioned Apache Spark. My M1 Max has 10 cores, and even though I could use all of them, I’ll leave one core for the operating system, so we’ll only use 9 cores. To compare with the M1 in my MacBook Air, we’ll also run a test where the M1 Max uses the same number of cores as the Air.

How do I measure? There are several ways to measure, but I choose the simplest one: I look at what time my script starts and when it ends, then calculate the difference. It’s not precise, but we’ll see later that the measurement errors don’t really matter.
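In R, this crude measurement is just the difference of two timestamps (the workload in the middle is a placeholder standing in for the actual script):

```r
start <- Sys.time()

# ... the actual analysis script would run here ...
result <- sum(sqrt(seq_len(1e7)))  # placeholder workload

end <- Sys.time()

# Elapsed wall-clock time in minutes
difftime(end, start, units = "mins")
```

For sub-second precision one would use something like `system.time()` or the microbenchmark package, but as noted, the differences measured below are so large that this rough approach is good enough.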

Results: Is it worth it?

It depends. The first test is somewhat disappointing. The larger RAM doesn’t seem to make much of a difference here, even though transformed copies of the AOL dataset are created and held in memory. The old M1 completes the script in 57.8 minutes, while the M1 Max takes 42.5 minutes. The data are probably loaded into RAM a bit faster thanks to the faster SSD, but that difference is only a few seconds; the rest seems to come from the CPU. At this price, the first test alone doesn’t justify the M1 Max (it costs twice as much as the MacBook Air).

Things get more interesting when I use the same number of cores on both sides for a cluster and then use Spark. The differences are drastic: 52 minutes for the old M1 with 16 GB of RAM, 5.4 minutes for the new M1 Max with 64 GB of RAM. The “old” M1, with its limited RAM, takes many minutes just to load the large dataset, while the new M1 Max with 64 GB handles it in under 1 minute. By the way, I’m not loading a simple CSV file here but rather a folder full of small partitions, so the nodes can read the data independently. It’s not the case that the nodes are getting in each other’s way when loading the large file.
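Reading such a folder of partitions with sparklyr might look like this (the path and table name are illustrative; `spark_read_csv` accepts a directory, so each node can read its own partition files independently):

```r
library(sparklyr)

sc <- spark_connect(master = "local[9]")

# Point spark_read_csv at a directory of partition files
# rather than at a single large CSV.
cc <- spark_read_csv(
  sc,
  name   = "common_crawl",
  path   = "data/common_crawl_partitions/",
  memory = TRUE   # cache the table in (distributed) memory
)

spark_disconnect(sc)
```

With `memory = TRUE`, the table is cached after the first read, so subsequent transformations run against RAM instead of re-reading from disk, which is where the 64 GB pays off.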

The Tool Craze: Working More Productively with Built-in Tools


As a student, I once had my professor’s laptop in my hands because I was supposed to configure something. This was around 1998, and he had a cool Wallstreet PowerBook from Apple. I was shocked by what he had installed. Almost nothing. Just what came with the operating system, and that wasn’t much. All of his texts were written with TextEdit, the macOS editor. No WordPerfect (which was still popular at the time), no Microsoft Word, nothing. Back then, I didn’t understand it. How could he not install more programs that would make his work easier? Today, that professor is my role model, at least in terms of his simple approach to using his computer.

Since then, I’ve seen countless tools that were supposed to help with productivity or organizing oneself and one’s knowledge. Some of them I’ve tried or even used for a longer period. Hardly any of them proved themselves over time, whether because the developers gave up due to declining demand (like with Life Balance), or because an app became obsolete with newer technologies (like Apple’s HyperCard being replaced by the World Wide Web), or because the buyer of a startup product like Wunderlist preferred to replace it with their own Microsoft “To Do” and simply shut down the acquired software. In the 2000s, Omni Group’s tools like OmniFocus, OmniOutliner, etc., and Evernote were the hot stuff. Today, it’s things like Notion and similar tools.

The more tools I’ve seen, the less I believe in them. Or rather, I no longer believe that there’s an app for everything, or that there should be. Competence is more important than a tool. A tool can’t compensate for incompetence. It hardly matters which tool you use if you know what you’re doing. The reverse doesn’t work. A fool with a tool is still a fool.

Just as one should question the added value a new app might bring in the realm of digital minimalism, one can also simply ask whether a program already installed with the operating system can’t do the job just as well. Apple’s Reminders app, for instance, is now quite decent and syncs across all devices, just like Apple Notes. I have no idea about Microsoft Windows, maybe it works just as well there. The Google universe also offers a cross-device experience with all kinds of tools. Of course, one can and should also ask whether it makes sense to entrust one’s data to any company. If you want something more complicated, you can find plenty of built-in tools on Linux systems.

The approach of working almost entirely with built-in tools has many advantages. No FOMO. Simply ignore everything that’s being sold to you as the latest productivity hack. No more cluttering up the hard drive. Instead of productively procrastinating by searching for and learning new tools to make the upcoming work faster, just do the work that needs to be done. The few software tools I now use in addition can be counted on two hands, e.g., R, RStudio, TeXShop, Ableton Live… and maybe I could have done the latter with GarageBand as well. My dock has remained unchanged since the initial installation of the computer.

Next, I’ll be discussing the organization of my files. More on that later.