This is the time for me to start looking for a new position. The first thing to do when you start this process is to update your CV and resume. Although many people (including myself until recently) think that these are two identical documents, in reality they are not. A resume is a short (at most three pages), concise summary of your experience and achievements that shows how you fit the future position. HR people screen tons of documents every day, and they want to know whether a person fits the position at first glance. In a CV, you describe your experience in detail, mentioning all the projects you have participated in, your contributions, what technologies were used, etc. Moreover, if you have academic experience, you list all your publications and academic achievements there. As a result, your CV can be quite long, especially if you have extensive experience, many publications, or both. Thus, if you are shortlisted, the interviewers can understand your experience in detail.
Still, both documents may share the same sections, such as education and work experience. In order to follow the DRY (don't repeat yourself) principle and unify the style of my CV and resume, this time I built both from the same LaTeX template, moderncv. In this article, I explain how I maintain these two documents together and list the modifications that I have made.
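To give a rough idea of the setup, here is a minimal moderncv skeleton; the class options, personal data, and the shared-file layout are illustrative placeholders rather than my exact configuration:

```latex
% Minimal moderncv skeleton (all concrete values are placeholders).
\documentclass[11pt,a4paper,sans]{moderncv}
\moderncvstyle{classic}   % other styles include casual, banking, oldstyle
\moderncvcolor{blue}

\name{Jane}{Doe}
\title{Software Engineer}

\begin{document}
\makecvtitle

\section{Education}
\cventry{2015--2019}{BSc in Computer Science}{Some University}{City}{}{}

% Sections shared between the CV and the resume can be factored out
% into separate files and pulled into both documents:
% \input{shared/experience}
\end{document}
```

Keeping the shared sections in files that both the CV and the resume pull in via \input is one simple way to avoid duplicating their content.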
Nowadays, it is quite popular to store semi-structured information in the JSON format. Indeed, JSON files have a simple structure and are easy for humans to read. JSON syntax allows one to represent complex dependencies in data and avoid data duplication. Moreover, all modern programming languages have libraries that facilitate parsing JSON and storing data in this format. Not surprisingly, JSON is extensively used to return data from Application Programming Interfaces (APIs).
At the same time, data analysts prefer to deal with structured data represented in the form of series and dataframes. Unfortunately, transforming JSON data into a structured format is not that straightforward. Previously, I wrote code to manually parse complex JSON files and create a pandas dataframe from the parsed data. However, I recently discovered a pandas function called json_normalize that has saved me some time in my projects. In this article, I explain how you can start using it in your projects.
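As a quick taste of what the article covers, here is a minimal sketch with made-up data; json_normalize flattens nested objects into dotted column names:

```python
import pandas as pd

# Hypothetical API response with nested objects.
records = [
    {"id": 1, "user": {"name": "Alice", "city": "Berlin"}},
    {"id": 2, "user": {"name": "Bob", "city": "Paris"}},
]

# Nested keys become flat, dot-separated column names.
df = pd.json_normalize(records)
print(df.columns.tolist())  # ['id', 'user.name', 'user.city']
```

For deeper structures, json_normalize also accepts parameters such as `record_path` and `meta` to control which nested lists are expanded.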
Until recently, I used tmux only occasionally, when I had to run some experiments on a remote server and later check the results of the execution. Basically, I used it only as a means to execute commands in the background. If I needed to run several commands on a remote server in parallel, I would open several terminals, connect each of them to the remote host, and then switch between them.
Recently, I started working with a remote server through
ssh more often, and the routine I was used to became very laborious. So, to improve my effectiveness, I spent several hours reading articles and watching videos and tutorials on how to use tmux. This article combines the knowledge I have acquired. It is also a crib sheet for me in case I forget something in the future.
Several weeks ago, during a compilation, I noticed that my laptop became very hot under my palms. At first, I did not pay any attention to this; however, when it became uncomfortable to work, I started to worry. My first thought was that the laptop had accumulated dust and could not dissipate the heat effectively. But then I noticed that I did not hear any fan noise when the load on the CPU increased, and I decided that the cooler was either broken or blocked. I was about to start disassembling my laptop, but luckily I decided to check the temperature using Linux utilities first. There I found out that, although the laptop felt hot, the [temp1] sensor reported a normal CPU temperature (a constant 45°C). This looked suspicious, so I checked other sensors' measurements and found that the [coretemp-isa-0000] sensors showed more plausible temperature values, which also reacted to load increases. In this article, I describe how I forced my system to react to the values from these additional sensors and cooled down my laptop.
Currently, besides all my other activities, I am developing a habit of programming following the Test-Driven Development (TDD) methodology. This is a perfect time, because I continue to explore Rust, a programming language that is new to me. Moreover, this language encourages you to cultivate this best practice by providing great documentation and a well-thought-out ecosystem.
In our programs, we often face exceptional situations (e.g., a lack of space when you try to write a file, or the absence of a resource), and we need to handle them. If you follow the TDD approach, you need to ensure that these exceptional situations are also properly covered by your tests. That is, you have to develop tests that reproduce these exceptional situations and make sure that your code detects and handles them correctly. In this post, I want to discuss how to test exceptional situations in Rust.
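To illustrate the idea with a toy example (the function and inputs are made up for this sketch), testing a failure path in Rust usually boils down to asserting that the returned Result is an Err:

```rust
use std::num::ParseIntError;

// A function whose failure path we want covered by tests.
fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    input.trim().parse::<u16>()
}

fn main() {
    // Happy path: the value parses successfully.
    assert_eq!(parse_port("8080").unwrap(), 8080);
    // Exceptional situations: assert that an Err comes back.
    assert!(parse_port("not-a-port").is_err());
    assert!(parse_port("70000").is_err()); // out of range for u16
    println!("all assertions passed");
}
```

In a real project these assertions would live in `#[test]` functions rather than in `main`, but the shape of the checks is the same.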
I like to work using an adapted Pomodoro technique, so I added a timer widget to my desktop (I use Kubuntu as my operating system). Unfortunately, in Kubuntu there is by default no sound notification when the timer ends. Moreover, the set of predefined timer intervals does not fit my needs. In this short post, I explain how to make the timer widget more convenient.
Today, I want to note down my thoughts on closures. Closures are important in Rust because they are extensively used in iterator adapters, which are paramount for developing highly performant programs. However, in my view, this topic is not well covered in The Book. This may be one reason why it is considered among the most difficult parts of the language. In this post, I will try to shed more light on it, hopefully making it clearer to Rust learners. Note that I am still a novice in the language, and my understanding may not be fully correct.
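As a tiny, self-contained illustration (the numbers are arbitrary), note how the closures below capture `threshold` from the enclosing scope, which a plain `fn` item could not do:

```rust
fn main() {
    let threshold = 10;
    // `|&x| x > threshold` borrows `threshold` from its environment;
    // this capture is what makes it a closure rather than a plain fn.
    let doubled_big: Vec<i32> = [3, 12, 7, 25]
        .into_iter()
        .filter(|&x| x > threshold)
        .map(|x| x * 2)
        .collect();
    assert_eq!(doubled_big, vec![24, 50]);
    println!("{doubled_big:?}");
}
```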
When you develop your first binary application in a new language, the first issue you face is how to organize your code so that you can easily extend it in the future. There is a good example in The Book of how to organize your code and parse and process command-line arguments by yourself. However, in the real world you would use a library to parse command-line arguments, which in Rust would most probably be the clap library. In this article, I describe my template for creating a CLI (command-line interface) application.
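For reference, the do-it-yourself approach from The Book looks roughly like the sketch below (the struct and field names are mine); a clap-based template replaces exactly this hand-written parsing:

```rust
// Hand-rolled argument parsing in the spirit of The Book's minigrep
// example; Config and its fields are illustrative names.
struct Config {
    query: String,
    path: String,
}

impl Config {
    fn build(mut args: impl Iterator<Item = String>) -> Result<Config, &'static str> {
        args.next(); // skip the program name
        let query = args.next().ok_or("missing query")?;
        let path = args.next().ok_or("missing file path")?;
        Ok(Config { query, path })
    }
}

fn main() {
    // In a real binary you would pass std::env::args() instead.
    let argv = ["prog", "needle", "input.txt"].iter().map(|s| s.to_string());
    match Config::build(argv) {
        Ok(c) => println!("searching for {} in {}", c.query, c.path),
        Err(e) => eprintln!("Problem parsing arguments: {e}"),
    }
}
```

The appeal of clap is that it generates this plumbing (plus help text and error messages) from a declarative description of the interface.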
In my previous articles (“Clearing Output Data in Jupyter Notebooks using Pre-commit Framework” and “Clearing Output Data in Jupyter Notebooks using a Bash Script Hook”), I described how to clear output data in Jupyter notebooks using the pre-commit framework and a git hook script, respectively. Both approaches are usable and can be applied to your project repositories. However, I have recently found a third way to clear Jupyter notebook output cells that seems to me cleaner and easier to implement. In this article, I describe my latest findings.
In my previous article, I described why you may need to clear output data in your Jupyter notebooks. As I was participating in a pre-sale project for AI Superior at the time, we required a quick solution to achieve this goal. That is why I used the Python-based pre-commit framework to create a pipeline that clears output data. However, this approach requires you to install an additional Python package on your system, which might not always be possible. Therefore, at the time I decided that I would eventually implement this approach as a pure Bash script. Recently, I found some spare time and dug deeper into this topic. As a result of my explorations, I developed a git pre-commit hook that clears Jupyter output cells and wrote this article describing it. If you are a 'show me the code' person and do not want to read the article, you can find the final script here.
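Whichever mechanism triggers it (the pre-commit framework, a Bash hook, or something else), the core transformation is the same: a notebook is JSON, and clearing it means emptying the output fields of code cells. Here is a minimal Python sketch of that idea (the function name and demo notebook are made up for illustration):

```python
import json

def clear_notebook_outputs(nb_text: str) -> str:
    """Strip outputs and execution counts from a Jupyter notebook's JSON."""
    nb = json.loads(nb_text)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return json.dumps(nb, indent=1)

# Tiny hand-made notebook to demonstrate the transformation.
demo = json.dumps({
    "cells": [{
        "cell_type": "code",
        "source": ["1 + 1"],
        "outputs": [{"data": {"text/plain": ["2"]}}],
        "execution_count": 1,
    }],
    "nbformat": 4,
    "nbformat_minor": 5,
})
cleared = json.loads(clear_notebook_outputs(demo))
print(cleared["cells"][0]["outputs"])  # []
```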