I had the honor to represent the University of Zurich at the 2020 European Forum of the School of Economics at Beijing University. My slides on Learning from the China Trade Shock: Lessons for European Monetary Union in a Post-COVID World can be downloaded here and are based on my recent UZH Econ discussion paper with Lilia Ruslanova.
I had the privilege to be invited to a radio debate on Radio della Svizzera Italiana with former IMF director Carlo Cottarelli to discuss the economic consequences of Mr. Trump’s (first?) administration. Listen to the debate (in Italian) here.
Here is the upshot of my argument:
For the first three years of his mandate, Trump presided over a goldilocks economy that was largely not of his own making, even though he created a flash in the pan with an ill-timed and unbalanced fiscal reform that favors the rich and limits fiscal space. The economic consequences of his denial of the COVID challenge will pose the greatest threat to his re-election.
Regarding his America-first agenda, his promise to bring back jobs in the classical manufacturing sector remains largely unfulfilled. His unilateralism alienated European allies and led to a trade war with China that is ill-targeted in its focus on traditional trade in goods. More recently, the focus has shifted to a strategic rivalry with China in high technology and the internet, as well as to issues regarding IP protection and the level playing field in foreign direct investment. Here he has a point, and this is a key issue that will stick around under future administrations, be they Republican or Democratic.
Future administrations will have to deal with built-in injustices of the tax system that have been reinforced by Trump’s reform. The focus here will not necessarily have to be on fiscal consolidation (the US can afford to borrow and, provided it is doing well economically, it can outgrow even very high levels of public debt) but on tackling economic inequality.
My new paper with Lilia Ruslanova, Softening the Blow: U.S. State-Level Banking Deregulation and Sectoral Reallocation after the China Trade Shock, is now online as a UZH discussion paper.
The upshot: U.S. state-level banking deregulation during the 1980s considerably dampened the fallout on local economies of the China trade shock a decade later. The reason: households in financially integrated areas could more easily borrow against their housing wealth to smooth consumption. This kept house prices and wages in the non-tradable sector up, facilitating labor reallocation away from manufacturing.
The paper has a clear take-away for European policymakers in the time of COVID-19: the pandemic is likely to be a major reallocation shock, similar to the China trade shock, with very heterogeneous effects across regional economies in Europe. But, as our results show, for efficient reallocation to take place, household-level access to finance is paramount. However, cross-border retail financial integration in the EMU basically does not exist because banking integration is still superficial and fragile. Therefore, even in the current situation, EMU policymakers’ homework remains the same: complete the banking union, get an EDIS done, encourage cross-border banking consolidation …
A position paper that I co-authored and that has been endorsed by all faculty at UZH’s department of economics is now available from the department main web page (in German): https://www.econ.uzh.ch/de/newsandmedia/Coronavirus-Positionspapier.html
A web site with videos and slides of the keynote lectures by Yongheng Deng and Christian Hilber at last year’s autumn forum, along with photo impressions from the forum, is available on the Center for Urban and Real Estate (CUREM) page.
As schools and universities have been shutting down around the globe, many of us in academia are wondering how we can get up to speed and to establish a stable workflow that allows us to get our podcasts, on-line lectures and tutorials out there for our students. In this post I provide a subjective list of open source tools that I am using.
There are at least two reasons why open source has a key role to play in the current situation:
- OSS is easy to roll out quickly and in large numbers (e.g. to an army of teaching assistants for multiple tutorial sessions in big lectures), without any licensing issues and in a decentralized manner.
- OSS is cheap. Actually, it’s free. Hence, there is no need for financially stretched schools and universities to spend heaps of non-budgeted money on proprietary software at very short notice.
In a note summarizing my panel presentation at the recent Belgian Financial Forum / SUERF Conference “Cross border financial services: Europe’s Cinderella?”, now appearing in the Revue bancaire et financière, I argue that cross-border banking consolidation is a prerequisite for better risk sharing in the eurozone. However, the incomplete banking union perpetuates regulatory fragmentation and prevents cross-border consolidation from becoming economically viable. Last but not least, the regional fragmentation of banking markets within many EMU member countries remains one of the biggest obstacles to consolidation, both within and across borders. Download the note here.
Our paper Channels of Risk Sharing in the Eurozone: What Can Banking and Capital Market Union Achieve? (with Egor Maslov, Iryna Stewen and Bent E. Sorensen) is now forthcoming in the IMF Economic Review.
In the paper, we argue that the interplay of equity market and banking integration is of first-order importance for risk sharing in the EMU. While EMU created an integrated interbank market, “direct” banking integration (in terms of direct cross-border bank-to-real sector flows or cross-border banking-consolidation) and equity market integration remained limited. We find that direct banking integration is associated with more risk sharing, while interbank integration is not. Further, interbank integration proved to be highly procyclical, which contributed to the freeze in risk sharing after 2008. Based on this evidence, and a stylized DSGE model, we discuss implications for banking union. Our results show that real banking integration and capital market union are complements and robust risk sharing in the EMU requires both.
Our paper “Holes in the Dike: The Global Savings Glut, U.S. House Prices and the Long Shadow of Banking Deregulation” (with Iryna Stewen) is now forthcoming in the Journal of the European Economic Association.
In the paper, we argue that capital inflows into the U.S. greatly contributed to the housing boom in the years prior to the financial crisis. States that liberalized their banking markets earlier saw bigger run-ups in house prices (and larger busts). The reason for this was the treacherous assumption that geographically diversified banks should be allowed higher leverage (as would be implied by value-at-risk (VaR) models of bank risk management). States that liberalized their banking markets earlier had a stronger presence of geographically diversified banks by the time the savings glut started to hit the U.S. from the mid-1990s onwards. As we show, using bank-level data, the lending of geographically diversified banks was more sensitive to aggregate capital inflows, and counties and states in which these banks had higher market shares saw a bigger expansion of mortgage credit and bigger house price increases.
I firmly believe that using open source software is an important prerequisite for reproducible and accessible research. We cannot expect others (think e.g. students or researchers in developing countries) to buy super-expensive software to reproduce research. We should also make sure that the code we use, including the applications we run the code on, are free and transparent.
Equally, I believe that we as academics have a special responsibility to teach our students to become free and independent digital citizens. That entails keeping a healthy distance from the closed ecosystems of commercial operating systems such as Windows or MacOS (not to speak of Android or iOS), which collect ever more data about everything we do on our computers and online. GNU/Linux operating systems are an open-source alternative where we can actually decide freely how much information about ourselves we share with others. Linux is also a great OS to run scientific software since it is lightweight in terms of resources. Also, most Linux distributions have the most important open source scientific software ready to install in their repositories. So, while setup may be a little more involved than with a Windows or Mac computer, you benefit from it later on, when installing and maintaining things gets a lot easier. (If you’re an economist: ever tried to configure a Mac for using, e.g., Dynare and Octave? Well, you see. In Linux Mint that’s two mouse clicks in the software repos….)
I first made Linux my main driver back in 2009. Since 2012, however, I have been using a Mac, with Linux running in a virtual machine. This came from my frustration at the time about the many driver and peripheral management issues I had encountered with the Linux versions that were around back then. But Linux development hasn’t stopped, and today the situation is hugely improved. At the same time, MacOS development has virtually stalled, and on the Windows side, forced updates have wreaked havoc on the data of many users. Therefore, it was time to give a plain Linux install a new chance when my old Macbook Pro had to be replaced recently.
Here are some experiences and things I learned when configuring and installing Linux Mint 19.1 on my new Dell Precision 5530 (64 GB, 2TB SDD, 4K display, NVida P1000) with TB 16 Dock and an external 4K Dell Display. Maybe they are useful for others interested in making the switch.
Before you start, be aware that you can make your life easier by buying the model I have with Linux Ubuntu 18.04 pre-installed. In fact, my machine did come with Ubuntu pre-configured and it worked very nicely. However, I have a preference for what I consider the rounder (and actually more Windows-like) user interface of Linux Mint. I am just more productive on it. But that’s a matter of taste. Since Linux Mint is based on Ubuntu, most of the experiences below should also be useful for anybody wanting to install Ubuntu on a DELL Precision laptop.
1. Prepare a bootable USB stick with Linux Mint 19.1
2. The next step is to convince your computer to boot from the memory stick. Given the secure boot settings on all new machines, that has become a bit complicated in the last couple of years. Here is what to do on the DELL Precision 5530:
— Before we start, note that I did a clean install, wiping my entire hard disk, so I didn’t keep another Ubuntu or Windows installation as dual boot. This would probably complicate things.
— When booting up the computer, press F2 as soon as the DELL logo appears. This gets you into the BIOS. In the BIOS, you have to enable legacy boot devices, change to legacy boot in “boot sequence” and then allow booting from USB, moving this up in the boot order.
— Save these settings, plug in your USB stick and reboot the computer.
— As soon as the DELL logo appears, press F12. This should get you into a screen where various boot options are given. If your USB stick is not recognized, follow the link to enable legacy boot, which takes you back to the step above.
— Boot up from your USB stick, if at all possible in EFI mode.
— This was the hardest part. Once your USB stick is recognized, you will boot into a Linux Mint live session and can follow the guided install process. It is important that the installation happens as EFI: the legacy boot option and sequence do not seem to be saved permanently on the DELL Precision, so you always fall back into EFI. If you happen to install Linux Mint in legacy mode, you will not be able to boot into it unless you repeat the above steps every time, switching to legacy boot in the BIOS. This happened to me until I figured out that once the USB stick is recognized in EFI mode, the installer will automatically guide you through an EFI install process.
— I installed Linux Mint without encryption of the hard disk. I don’t need that: I encrypt truly confidential data with cryfs on a folder-by-folder basis. I therefore have no idea how full encryption would affect system performance or whether it would create issues of its own.
What I would clearly caution against is choosing local user folder encryption at install. That can create all kinds of problems; in particular, you will not be able to use Dropbox for Linux easily anymore. Since I am a heavy user of Dropbox for my collaborative projects, that was a no-go.
— Once the installer has completed, you can restart the computer. (I had to shut down from the live session by pressing the power button; it would not shut down on its own in finite time.)
— The machine restarts. You will usually not see a GRUB screen, but at one stage you will see the Linux Mint logo appear. You are already looking forward to starting to use Linux Mint on your DELL. But not so: the boot process will hang before you get to the login screen.
Here is how to deal with this: restart the computer by pressing the power button. Then press SHIFT (for EFI installs). Eventually, you will see a GRUB menu (which is, by default, hidden after a clean wipe install, since there is only one OS to boot anyway). It may take a couple of attempts to get to the GRUB menu, since the DELL Precision 5530 is so damn quick (you may also try pressing Esc, but for EFI installs it should truly be SHIFT).
Once you get to the GRUB menu, press “E”. This will show you the boot loader script for the currently selected boot option. (Choose the first one, Linux Mint 19.1, which is set as default anyway; there may be others, including a recovery option.)
Once you see the bootloader script, replace the words “quiet splash” with “nomodeset” in the line starting with “linux”.
(See the guide in the Linux Mint forum here: https://linuxmint-installation-guide.readthedocs.io/en/latest/boot_options.html#nomodeset-boot-option)
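After the edit, the kernel line should look roughly like this (the kernel version and the UUID below are illustrative placeholders, not values from an actual machine):

```
linux /boot/vmlinuz-4.15.0-20-generic root=UUID=1234-abcd ro nomodeset
```

The nomodeset parameter tells the kernel not to do mode setting for the graphics hardware during early boot, which is what works around the hang here.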
Be careful not to mistype or delete anything here (the editor is very sluggish and the font will be tiny with a 4k screen). If you think you have mistakenly deleted or changed something else than the above “quiet splash”, just exit and start over with this step.
Once you have successfully typed in “nomodeset”, you can press CTRL+X or F10 and the machine will start to boot and get you to the login screen. Once you are in your Linux Mint session (probably it will be on software rendering mode, making screen display etc quite slow) you should install the nvidia drivers as described here: https://linuxmint-installation-guide.readthedocs.io/en/latest/drivers.html
Once you have successfully installed the driver using NVidia Prime, you can restart the computer (it should now restart using the appropriate menu items from within the Linux session).
DONE. Once you have restarted and logged in again, you are all set. You can now start to customize and get your Dell ready for work. Two important things I noticed:
- Resume from suspend in NVidia mode does not work well, at least not if the NVidia Prime configuration menu (with all the settings for the graphics card) is active. So here is what I did: I chose Intel mode (requires a password and a logout/login). Then suspend/resume works fine. I can even switch back to NVidia mode, activating it by logging out and in again (in which case, however, I do not get back all the setting options in the Prime menu); then suspend/resume also works. (I have not tested whether that really activates the NVidia GPU, though.) Careful, however: if you leave NVidia mode on and restart the computer, you will again get the full Prime menu with all the settings for the NVidia GPU, but also all the problems concerning suspend/resume.
At first, I thought I would just live with this bug by keeping the machine mostly in Intel mode, where everything works. I have 64 GB of memory, so standard graphics on web pages, video streaming etc. really work fast, even in Intel mode. However, the NVidia suspend/resume issue evaporated once I tackled another inconvenience:
- Even though suspend/resume worked out of the box in Intel mode, the default sleep mode after the Linux Mint install on my machine was “s2idle”. You should permanently change the mode to “deep”, which uses considerably less battery (s2idle keeps all the USB ports and other interfaces waiting for inputs, while “deep” only keeps the RAM refreshed and switches off the CPU and everything else to save energy). See the discussion of system sleep states in the Linux kernel documentation here:
The sleep mode has to be set by passing the parameter mem_sleep_default=deep to the Linux kernel on startup.
This has to happen by editing the GRUB configuration file:
sudo xed /etc/default/grub
Then add the parameter “mem_sleep_default=deep” at the end of the line starting with GRUB_CMDLINE_LINUX_DEFAULT, so that it looks like this
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem_sleep_default=deep"
Save the file. Then run sudo update-grub to write the change to the boot configuration. See here for more detail about how to edit the GRUB bootloader configuration:
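If you prefer to script the edit rather than use an editor, a sed one-liner does the job. This is only a sketch on a scratch copy in /tmp; on the real system you would run the sed command with sudo against /etc/default/grub and then run sudo update-grub:

```shell
# Scratch copy standing in for /etc/default/grub (illustration only).
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub.test
# Append the kernel parameter just before the closing quote.
sed -i 's/"$/ mem_sleep_default=deep"/' /tmp/grub.test
cat /tmp/grub.test
# -> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem_sleep_default=deep"
```
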
On the next reboot, your machine should start with “deep sleep” enabled. (You can check by typing

cat /sys/power/mem_sleep

in a terminal. This should give you something like

s2idle [deep]

i.e. the current option appears in square brackets.)
Remark 1: Setting the sleep mode to [deep] also solved the suspend/resume hiccups in NVidia graphics mode. Suspend/resume now works perfectly fine in both graphics modes.
Remark 2: To get the TB16 dock working on the Dell Precision 5530, you need to set Thunderbolt Security to “None” in the BIOS, and you should also disable the C-Dock option in the BIOS (otherwise you may get periodic disconnects of mice, keyboards or other peripherals attached to the dock).
Remark 3: You also get very good battery life on this powerful machine if you install the tlp tools:
sudo apt install tlp tlp-rdw
Then, after each fresh user login, type at a terminal
sudo tlp start
Edit (Jan 2019): TLP works fine, but often I would still not get more than 3.5 hrs of effective use from the machine. I used powertop to find out that the NVidia graphics card was still using power even though the NVidia Prime driver was running in Intel mode:
sudo apt install powertop
Then run it from a terminal with root privileges (sudo powertop). You can toggle powertop’s various screens with Tab until you eventually reach a screen that lists “good” and “bad” (in terms of battery life) system settings. Here you can play around and check what changing the settings does for battery life. Don’t worry: these are temporary settings and can usually be changed without endangering the system. On this screen you will often find your graphics card; when you set it to “good”, battery usage usually drops dramatically. So, I decided to tell tlp to disable the graphics card when it is not in use.
To do so, edit the tlp configuration file with
sudo xed /etc/default/tlp (substitute the name of your editor for xed)
and change the commented line

# RUNTIME_PM_DRIVER_BLACKLIST="radeon nvidia nouveau"

so that it is uncommented and the blacklist is empty:

RUNTIME_PM_DRIVER_BLACKLIST=""
Then save the configuration file and restart tlp (or your system). The issue is discussed here:
The steps above essentially tell tlp NOT to exempt any devices (and in particular not the ones that are governed by the radeon (AMD) or NVidia (nvidia or nouveau) drivers) from its power management. Guess what? Battery life now increases to 4.5-5 hrs max in normal use (i.e. constant WLAN connection, many browser tabs open, some text or spreadsheet editing and the occasional music stream, and based on 90% of the original battery capacity remaining).
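The edit can also be sketched non-interactively; again this operates on a scratch copy for illustration, with the real file being /etc/default/tlp:

```shell
# Scratch copy standing in for /etc/default/tlp (illustration only).
printf '#RUNTIME_PM_DRIVER_BLACKLIST="radeon nvidia nouveau"\n' > /tmp/tlp.test
# Uncomment the line and empty the blacklist, so that no driver is
# exempted from tlp's runtime power management.
sed -i 's/^#\?RUNTIME_PM_DRIVER_BLACKLIST=.*/RUNTIME_PM_DRIVER_BLACKLIST=""/' /tmp/tlp.test
cat /tmp/tlp.test
# -> RUNTIME_PM_DRIVER_BLACKLIST=""
```

After applying the change to the real file, restart tlp (or reboot) as described above.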
Of course, in normal circumstances you should only ever use about 4 of these 5 hrs, since deep discharges (i.e. below around 20-25% charge remaining) tend to reduce the lifetime of Li-ion batteries.
Note that 4.5-5 hrs at 90% battery capacity is perfectly OK for a machine with 64 GiB of memory and a 4K display. The display certainly uses a lot of energy, and on battery I usually keep it dimmed down to 20-30% (which is still incredibly bright; even on AC power, I don’t usually turn it to more than 50% brightness since it would hurt my eyes). The display is clearly better and brighter than my previous Mac’s (2015) Retina screen, and with the tweaks above, the battery life is now also quite comparable.
Update I, Jan 2020: Activating hibernation
By default, current Linux distros do not support hibernation (i.e. saving an image of the current system state to disk). I have found this a little annoying, since putting the system to sleep still draws battery. On occasion I have therefore had to restart my system because the machine ran out of battery while asleep. Besides the inconvenience of having to restart the system, running down the battery reduces its lifetime and capacity (see above). Fortunately, hibernation can be re-enabled. The requirement is that you have a sufficiently large (i.e. bigger than your RAM) swap partition on your disk. After a fresh standard install of Linux Mint, this will generally be the case (dual boots can be tricky, though). This web page has a great tutorial on how to get hibernation working in Linux Mint (sorry, in German):
Update II, Jan 2020: Installing a Logitech Trackman Marble
Works out of the box, but wheel scrolling is not set up by default. There is an Ubuntu Community page explaining how to deal with it:
I found it very convenient to set the scroll wheel to be the right button.
Update III, Jan 2020: Increasing the resolution of the GRUB screen.
On a 4K display, the GRUB screen on start-up often is just too small to read.
This can be adjusted by editing the file /etc/default/grub with sudo rights and uncommenting the line starting with #GRUB_GFXMODE=
If you want to change the resolution to something a little less coarse, first find out which video modes your graphics setup supports. This can be done by going to console mode from the GRUB screen (press c) and then typing videoinfo at the prompt. This gives you the list of supported resolutions. You can then edit your /etc/default/grub accordingly; for example, I found one of the higher-resolution modes on the list quite pleasing. Once you have saved the changes to /etc/default/grub, remember to run sudo update-grub. See here for details: https://askubuntu.com/questions/54067/how-do-i-safely-change-grub2-screen-resolution.
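As a sketch, this change can be scripted too. The value 1280x1024 below is only an example; pick a mode that videoinfo actually lists on your machine (again a scratch copy for illustration, the real file being /etc/default/grub):

```shell
# Scratch copy standing in for /etc/default/grub (illustration only).
printf '#GRUB_GFXMODE=640x480\n' > /tmp/grub.gfx
# Uncomment the line and set the desired resolution.
sed -i 's/^#GRUB_GFXMODE=.*/GRUB_GFXMODE=1280x1024/' /tmp/grub.gfx
cat /tmp/grub.gfx
# -> GRUB_GFXMODE=1280x1024
```

As with the sleep-mode change, the new setting only takes effect after running sudo update-grub and rebooting.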
Update May 2020: Configuring touchpad gestures
It is easy to configure 3- or 4-finger gestures using the libinput-gestures utility.
See the link here: https://github.com/bulletmark/libinput-gestures
sudo apt-get install xdotool wmctrl
sudo apt-get install libinput-tools
Then install the repository in the link above:
git clone https://github.com/bulletmark/libinput-gestures.git
cd libinput-gestures
sudo make install (or sudo ./libinput-gestures-setup install)
and configure autostart with
libinput-gestures-setup autostart
libinput-gestures-setup start
If the baseline configuration is good for you, you’re done. Otherwise you can copy the default configuration file /etc/libinput-gestures.conf to the (hidden) configuration folder in your home folder as ~/.config/libinput-gestures.conf and then edit it according to the specifications in the above link.
Personally, I use vertical four-finger gestures to get the workspace overview and horizontal four-finger gestures for workspace switching. So, in ~/.config/libinput-gestures.conf I have deactivated the standard gestures and added the following lines:
gesture swipe up 4 xdotool key ctrl+alt+Up
gesture swipe down 4 xdotool key ctrl+alt+Down
gesture swipe right 4 xdotool key ctrl+alt+Right
gesture swipe left 4 xdotool key ctrl+alt+Left
This requires xdotool to be installed (sudo apt install xdotool) and maps the swipes to the corresponding keyboard shortcuts for switching and overviewing workspaces, which are then passed on to the system.
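Writing these four lines to the user-level config can be sketched as follows (assuming libinput-gestures reads ~/.config/libinput-gestures.conf, as described in the link above; this overwrites any existing user config):

```shell
# Create the user-level gestures config, which overrides the system-wide
# /etc/libinput-gestures.conf for this user.
mkdir -p ~/.config
cat > ~/.config/libinput-gestures.conf <<'EOF'
gesture swipe up 4 xdotool key ctrl+alt+Up
gesture swipe down 4 xdotool key ctrl+alt+Down
gesture swipe right 4 xdotool key ctrl+alt+Right
gesture swipe left 4 xdotool key ctrl+alt+Left
EOF
# Count the gesture definitions we just wrote.
grep -c '^gesture' ~/.config/libinput-gestures.conf
# -> 4
```

Restart libinput-gestures afterwards so that the new configuration is picked up.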