Sunday, September 21, 2014

Current working directory of a process

Recently, I have been dealing with a lot of automation scripts and running automation jobs. Sometimes I am required to kill my jobs to give way to other processes. :(

Anyway, I need to resume my automation job after the "higher priority" processes have completed. Thus, the working directory of my process is EXTREMELY important.

So, before killing the job, I have already gotten the pid. To get the working directory, running the pwdx command with the pid will do.

pwdx <pid>

Example:

pwdx 1234
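
On Linux, the /proc filesystem gives the same answer, which helps on systems where pwdx is not installed. The cwd entry of a process is a symlink to its working directory:

readlink /proc/1234/cwd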

For further information or more ways to do this, check out this page: http://www.cyberciti.biz/tips/linux-report-current-working-directory-of-process.html


Thursday, August 28, 2014

Usage of the 5 most important synthesis modules

Hello there. I am Jonie Lim from Malaysia. This is the final assignment for Introduction to Music Production on Coursera. For this week, I am choosing the topic of explaining the usage of the 5 most important synthesis modules.

Let me further break these 5 modules into 2 categories, the primary modulation and the secondary modulation. The oscillator, filter and amplifier belong to the primary modulation. They work directly on the sound that we will hear. The secondary modulation includes the LFO and envelope, which modulate the primary modules while the audio signal is being shaped.

Primary modulation


The oscillator, also known as VCO (voltage controlled oscillator), is the module that creates the sound. It creates the sound with a timbre based on the waveform selected in the synthesizer. A sine wave gives only the fundamental frequency, triangle and square waveforms have the fundamental plus the odd harmonics, while a sawtooth waveform gives the fundamental plus both odd and even harmonics in the timbre.
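
As a rough illustration of this harmonic content (a sketch of my own in C, not how a VCO is actually built), a square wave can be approximated by summing only odd harmonics, while a sawtooth sums every harmonic:

#include <math.h>

/* Additive approximation of one cycle at phase p (0 to 2*pi), using
 * nharm harmonics, each at amplitude 1/k. */
float square_additive(float p, int nharm) {
    float s = 0.0f;
    for (int k = 1; k <= nharm; k += 2)   /* odd harmonics only */
        s += sinf(k * p) / k;
    return s;
}

float saw_additive(float p, int nharm) {
    float s = 0.0f;
    for (int k = 1; k <= nharm; k++)      /* odd and even harmonics */
        s += sinf(k * p) / k;
    return s;
}

The more harmonics you sum, the closer the result gets to the ideal square or sawtooth shape.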

The filter, also known as VCF (voltage controlled filter), is used to shape the spectrum of the generated sound. This determines how the generated audio signal sounds, for example like a tuba or a flute. This module is often under the control of an envelope or LFO. The filter can be a low pass, high pass, band pass, notch or comb filter.

Acoustic filters.svg
"Acoustic filters" by Mike.lifeguard, derivative work from Acoustic_filters.png. Licensed under Public domain via Wikimedia Commons.

The amplifier, or VCA (voltage controlled amplifier), shapes the volume of the sound signal. It amplifies or attenuates the signal before it is passed to the output. The gain of the VCA can be controlled by an LFO or envelope generator too.

Secondary modulation


The LFO, or low frequency oscillator, modulates the sound with a very low frequency signal, normally below the human audible range (roughly 0 Hz - 20 Hz). As with the primary oscillator of a synthesizer, the signal can be of any waveform. It can create a vibrato effect on the sound.
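
For instance (a sketch of my own, with made-up parameter values), vibrato can be produced by letting a 5 Hz LFO wobble the oscillator's pitch a fraction of a semitone up and down:

#include <math.h>

#define PI 3.14159265358979f

/* Instantaneous frequency of a note with vibrato at time t seconds:
 * a 5 Hz sine LFO sweeps the base pitch up and down by depth
 * semitones (a typical depth is well under one semitone). */
float vibrato_freq(float base_hz, float depth_semitones, float t) {
    float lfo = sinf(2.0f * PI * 5.0f * t);   /* 5 Hz LFO */
    return base_hz * powf(2.0f, depth_semitones * lfo / 12.0f);
}

Calling vibrato_freq(440.0f, 0.3f, t) for increasing t makes an A4 wobble about a third of a semitone around its pitch.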

The envelope shapes the volume of the produced note over time. The envelope is formed by 4 parameters: attack, decay, sustain and release. Attack, decay and release are durations, while sustain is a level (amplitude).
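
To make the 4 parameters concrete, here is a minimal sketch in C of how an ADSR envelope value could be computed (my own illustration, simplified so the release always starts from the sustain level; real synthesizers handle this with more care):

#include <stdio.h>

/* Envelope gain (0.0 to 1.0) at time t seconds after note-on.
 * attack, decay and release are durations in seconds; sustain is a
 * level; note_off is when the key was released. */
double adsr(double t, double attack, double decay, double sustain,
            double release, double note_off) {
    if (t >= note_off)                      /* release: fade to silence */
        return (t - note_off >= release) ? 0.0
             : sustain * (1.0 - (t - note_off) / release);
    if (t < attack)                         /* attack: ramp from 0 to 1 */
        return t / attack;
    if (t < attack + decay)                 /* decay: ramp from 1 to sustain */
        return 1.0 - (1.0 - sustain) * (t - attack) / decay;
    return sustain;                         /* sustain: hold the level */
}

int main(void) {
    /* a 1-second note with A=0.1s, D=0.2s, S=0.6, R=0.3s */
    for (double t = 0.0; t < 1.4; t += 0.1)
        printf("t=%.1f gain=%.2f\n", t, adsr(t, 0.1, 0.2, 0.6, 0.3, 1.0));
    return 0;
}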

ADSR parameter.svg
"ADSR parameter". Licensed under CC BY-SA 3.0 via Wikimedia Commons.

These are the 5 important modules in a synthesizer. Thanks for taking the time to read this. I am still working on identifying each of these elements, their controls, and how they work in the software that I have. Hopefully, I can identify them soon. Have a great day! :)


Thursday, August 21, 2014

Flanger vs Phaser

Hello there! I am Jonie Lim from Malaysia. This is the 5th week assignment for Introduction to Music Production on Coursera. This week's topic is very tough for me; I'll try my best to explain how short delay effects, i.e. the flanger and phaser, function.

Firstly, as usual, let's listen to the original guitar sound which I have pre-recorded. Warning: I am not a good guitar player. :P



My workspace.


To try to show the difference between the flanger and the phaser, I have set their parameters similarly.

Flanger settings

Phaser settings

Note that I have both set to the highest intensity to make sure the effect is significantly audible. Next, I set both speeds to 0.5 Hz. This is the low frequency used to modulate the signals. The flanger's feedback can be set to inverted or normal here. Since the phaser cannot be set to inverted, I have both feedbacks set to 50%, with the flanger's set toward normal feedback.

Next, let's look at some technical aspects of the 2 audio effects.

A flanger is an audio effect that mixes 2 identical signals together, with one of the signals slightly delayed and the delay gradually changing over time (at 0.5 Hz in the sample here). The delay is normally controlled by an LFO (low-frequency oscillator). Observed on a frequency spectrum, it produces notches like a comb filter. The effect sounds harmonious and evenly distributed across time, because the delay is applied to the whole signal equally.
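
As a rough sketch (my own illustration, not any plugin's actual code), the core of a flanger in C could look like this, with the LFO sweeping the delay length:

#include <math.h>

#define PI     3.14159265358979f
#define SR     44100              /* sample rate in Hz */
#define MAXDEL 512                /* maximum delay in samples (about 12 ms) */

static float buf[MAXDEL];         /* circular delay line */
static int   widx = 0;            /* write index */

/* Process one sample x; n is the running sample counter. The delayed
 * copy is swept between roughly 0 and 10 ms by a 0.5 Hz LFO, then
 * mixed equally with the dry signal, producing the comb-filter notches. */
float flanger(float x, long n) {
    float lfo   = 0.5f * (1.0f + sinf(2.0f * PI * 0.5f * (float)n / SR));
    int   delay = 1 + (int)(lfo * 440.0f);   /* 1..441 samples */
    buf[widx] = x;
    float delayed = buf[(widx - delay + MAXDEL) % MAXDEL];
    widx = (widx + 1) % MAXDEL;
    return 0.5f * (x + delayed);
}

Feeding part of the output back into the delay line (the feedback knob) deepens the notches; inverting the feedback flips which frequencies get cancelled.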

Building block of a flanger

Audio effect after applying the flanger plugin:





A phaser, on the other hand, is an effect where the DAW or the gadget maker decides how the notches are placed across the frequency spectrum. Most of the time, a phaser has only a few notches across the frequency spectrum. Like the flanger, it is modulated by an LFO, so that the positions of the notches continuously move across the frequency spectrum. With the same settings, the phaser sounds more synthetic, with more pronounced sweeping between the left and right channels. A phaser is built from a chain of all pass filters, all of which are modulated by the LFO. Thus, the output of the phaser block has a frequency-dependent (non-linear) phase delay, which is then mixed with the original signal.
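
For the curious, one first-order all pass stage, several of which are chained inside a phaser, could be sketched in C like this (again my own illustration):

/* One first-order all pass stage. It passes every frequency at equal
 * amplitude but shifts its phase; the coefficient a (between -1 and 1)
 * sets where the 90 degree phase shift falls. A phaser chains several
 * of these, sweeps a with an LFO, and mixes the result with the dry
 * signal, creating the moving notches.
 * Difference equation: y[n] = -a*x[n] + x[n-1] + a*y[n-1] */
typedef struct { float x1, y1; } Allpass;

float allpass(Allpass *s, float x, float a) {
    float y = -a * x + s->x1 + a * s->y1;
    s->x1 = x;
    s->y1 = y;
    return y;
}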

Building block for a phaser


Audio effect after applying the phaser plugin:






This week's assignment is the toughest one so far. I did quite a number of readings and re-watched the lecture videos a few times, and I find myself still digesting this big topic. I guess it will be a life-long learning process. I used a flanger in one of my recordings previously, but I didn't literally "know" that until doing this assignment. The only reason I applied that effect to the guitar track was... I played it badly, and this effect seemed to cover the weakness a lot. :P Anyway, you can check it out at : 碎心: http://youtu.be/C4eJvkTitdc

Thursday, August 14, 2014

Noise Gate

Hi! I am Jonie Lim from Malaysia. In this week 4 assignment, I would like to dive into the noise gate plugin. I have a very simple "studio" setup: a MacBook, itself. I don't have a mic or a special room to do recording, thus acoustic noise is my #1 concern in recording. In this blog post, I'll be using one of my previous recordings to demonstrate how to use the noise gate. I am using GarageBand 5 for this assignment.




This is the audio waveform of the small purple recording that was split and copied out from the original recording. To hear the contrast, the effects and plugins that I had put on that track are removed.

This is what the original recording sounds like. You can hear a "click" sound at 0:04 going into 0:05, and also notice the significant noise right after the singing phrase.







Looking at the audio waveform, the "click" sound is highlighted. To get rid of the noise and the unintentional "click" sound, we can use a noise gate plugin to filter them out. As we know, the recorded sound is normally much louder, i.e. has a higher amplitude, than the noise. In this case, we can set the noise gate threshold higher than the noise, but below the sound of interest, which can be singing or instrument playing.

Since there is no view in GarageBand 5 to adjust the threshold against the waveform, this has to be done manually by listening. In this example, I set the threshold to -36dB.
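
Conceptually the gate is doing something very simple. Here is a sketch in C of the basic decision (my own illustration; a real gate adds attack/release smoothing to avoid clicks), including the conversion of the -36dB threshold to a linear amplitude:

#include <math.h>

/* Mute any sample whose level falls below the threshold. A threshold
 * of -36 dB corresponds to a linear amplitude of 10^(-36/20), which is
 * about 0.016 on a full scale of 1.0. */
float noise_gate(float x, float threshold_db) {
    float threshold = powf(10.0f, threshold_db / 20.0f);
    return (fabsf(x) < threshold) ? 0.0f : x;
}

Each incoming sample would then be passed through noise_gate(sample, -36.0f).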




Below is the mixed output.



OK, that's all for this post. I hope you can hear the difference between the original, untouched recording and the same recording with the noise gate plugin applied.

In case you are interested in listening to the whole song, you can visit the YouTube page at : https://www.youtube.com/watch?v=95Tn1iiBfHk

Note, this was recorded prior to this class, and the noise gate was not applied. However, it might not be so noticeable, as the singing and instruments are much louder than the noise as a whole. I normally use trial and error with the preset plugins to get the sound effect that I want. Next time, I'll try to use the noise gate to eliminate the unwanted noise. :P

Hope you enjoy reading! Cheers!


Thursday, August 7, 2014

Audio effects

Hi! I am Jonie Lim from Malaysia. In this blog post, I would like to talk about the audio effects. This would be my 3rd week assignment for Introduction to Music Production.

Basically, there are 3 major categories of audio effects, corresponding to the 3 properties of sound. I am going to take a plugin from each category, and explain how these plugins work in GarageBand 5.

I have recorded myself singing 4 notes and exported it to mp3 with no audio effect applied.


You can listen to it here.



1. Dynamic Effects

Dynamic effects play with the amplitude of the sound. One of the plugins that manipulates the amplitude of the sound is the compressor. Basically, a compressor "squeezes" an audio signal when it rises above a specific threshold level. In GarageBand, it then takes the ratio setting for the strength of the compression, the attack setting for how quickly the compressor reacts when the signal breaches the threshold, and the gain setting for the output loudness of the effect. See the screenshot below.


I applied the effect to the earlier audio and mixed it to mp3 again. Here's what you get by applying the above setting to the original audio signal.



To me, the effect is very mild. One way to "observe" the differences is through the visualization of the mp3 file playing in the media player.
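
To make the threshold and ratio settings concrete, here is a much simplified sketch in C of the gain computation a compressor performs (my own illustration; I don't know how GarageBand actually implements it):

#include <math.h>

/* Scale down any sample above the threshold so that every ratio dB of
 * input above the threshold yields only 1 dB of output, then apply the
 * make-up gain. Real compressors smooth the gain with attack and
 * release times instead of acting per sample. */
float compress(float x, float threshold_db, float ratio, float makeup_db) {
    float level_db = 20.0f * log10f(fabsf(x) + 1e-9f);  /* sample level in dB */
    float gain_db  = 0.0f;
    if (level_db > threshold_db)
        gain_db = (threshold_db - level_db) * (1.0f - 1.0f / ratio);
    return x * powf(10.0f, (gain_db + makeup_db) / 20.0f);
}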

2. Delay Effects

Delay effects play with the propagation property of the sound. They provide a sense of space for the audio signal in the mix. One of the plugins for this is the flanger. A flanger mixes a delayed copy of the audio signal with the original signal. Here's the flanger setting I applied to the original audio signal.


The intensity setting controls how much of the flanger effect is applied to the mix. The speed setting controls how fast the delay time of the "new" audio signal is modulated. The feedback setting controls how much of the output is fed back into the delay line.

Observe the difference between the following mixed audio and the original audio.



3. Filter Effects

Filter effects control the timbre/frequency content of the sound. One of the plugins for this is the Band Pass filter. It passes frequencies within a certain range and attenuates frequencies outside of that range. Let's see the example that I applied to the original recording.


I set the band pass filter to ~2.7kHz, and let the attenuation happen gradually around that frequency.

Hear the impact of applying this plugin to the original recording. You may notice the "sound" has been "cut" and feels incomplete. It sounds as if the audio signal were coming through a small loudspeaker. :)
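
Digitally, a band pass filter can be sketched with a classic Chamberlin state-variable filter; this C fragment (my own illustration, unrelated to GarageBand's implementation) passes only frequencies near fc:

#include <math.h>

#define PI 3.14159265358979f
#define SR 44100.0f   /* sample rate in Hz */

static float low = 0.0f, band = 0.0f;   /* filter state */

/* Chamberlin state-variable filter, band pass output. fc is the centre
 * frequency in Hz (keep it well below SR/6 for stability); q is the
 * damping (1/Q), so a smaller q gives a narrower band. */
float bandpass(float x, float fc, float q) {
    float f = 2.0f * sinf(PI * fc / SR);   /* frequency coefficient */
    low  += f * band;
    float high = x - low - q * band;
    band += f * high;
    return band;
}

For the setting above, each sample would go through something like bandpass(sample, 2700.0f, 0.7f).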



In summary, I have prepared the overview below.

Dynamic Effects (sound property: Amplitude)
  • Compressor : Squeezes the audio signal when it rises above a threshold value, reducing the signal's dynamic range. This results in a "flattening" of the sound.
  • Limiter : Boosts the signal below the threshold and hard-limits any audio signal that goes beyond it.
  • Expander : Amplifies or attenuates the audio signal. A stereo expander can create a surround effect, enriching the stereo field of a mono signal with spatial effects.
  • Gate : Only allows a signal to be heard if it exceeds a specific volume threshold. This can be used to silence a constant low noise floor in a signal.

Delay Effects (sound property: Propagation)
  • Reverb : Applies an echo-like effect to give a sense of room size.
  • Delay : Applies a delay to the original signal.
  • Phaser : Creates peaks and troughs in the frequency spectrum that vary over time, producing a sweeping effect on the audio signal.
  • Flanger : Mixes a delayed copy of the signal with the original signal.
  • Chorus : Mixes differently pitched copies of the original signal.

Filter Effects (sound property: Timbre/Frequency)
  • High pass filter : Passes high frequencies and attenuates frequencies below the cutoff value.
  • Low pass filter : Passes low frequencies and attenuates frequencies above the cutoff value.
  • Band pass filter : Passes frequencies within a certain range and attenuates frequencies outside of that range.
  • Equalizer : Boosts or cuts individual frequency bands to control the frequency response characteristics.



I read the feedback from the peer evaluators, and I really appreciate the comments given. I hope I have improved on my assignment this week. Thanks for spending time to read this. I hope you enjoy it. :)

Thursday, July 31, 2014

Add a software instrument and record MIDI using GarageBand

Hi! I am Jonie Lim from Malaysia. This is my second week assignment for Introduction to Music Production on Coursera. I have chosen the topic of recording MIDI in a DAW (Digital Audio Workstation). I am using GarageBand 5 as my DAW. Yes, it is a very old version; my Macbook is currently 5 years old. :P

Firstly, I created a test project in GarageBand. I set the Count In as the click and countoff (this is what I understand it to be). I also turned on the Metronome so I could follow the tempo of the project, which I have set to 120.

Setting the Count In and Metronome



Then, I created a new track selecting Software Instrument.

Creating a software instrument track

I have my MIDI controller connected to my Macbook. I got this MIDI controller last year when I had a chance to go to the US. I hadn't worked out anything with it, yet. Thanks to this course, it has helped me find ways to play with it! :D

Connecting MIDI controller to DAW

I can select which software instrument to use from the menu on the right of GarageBand.

Selecting software instrument

I selected the Planetarium from the Synth Basics category. GarageBand changes the track name according to the selection automatically. I can still change the track name before recording. However, whenever a new software instrument is selected on the same track, it will be renamed again, automatically.

Selected software instrument for the MIDI track

Finally, I selected the track that I wanted to record to, and clicked the record button.

Track editor : showing score view

As we know, MIDI is something like a real-time score, which can be viewed in GarageBand right away by showing the track editor. Editing the score actually changes the notes played by the software instrument.

You can view it in piano roll as well. See the following screenshot.

Track editor : showing piano roll view

To change the velocity of the note(s) in the MIDI track, firstly select the note(s), then go to the left pane of the track editor and modify the velocity of the selected note(s) by scrolling left or right on the velocity control. In the illustration below, the velocity of the selected note in green is 119.

Track editor : controls

GarageBand provides the quantize function through the "Align to" control, located below the Velocity control. The off-to-max bar indicates the percentage of quantization mentioned in the lecture.

Track editor : quantize the notes

I mixed down the recording and uploaded the file to SoundCloud. This is the audio file "generated" by the software instrument that I chose in GarageBand in this blog post.



I tried to explore GarageBand on my Macbook, but it seems it is unable to perform some of the tasks mentioned in the lectures, such as colouring the track/recording, cross-fading, etc. That is why I chose this topic to discuss for this assignment. I am thinking of upgrading it, or downloading Audacity to proceed with this course. Probably this would help me more with my future recordings too! ;)

Thank you for taking your time to go through this blog post. Hope you enjoy reading it, as I enjoyed preparing for this. Have a great day!

Thursday, July 24, 2014

Type and Usage of Important Studio Cables

Hi, I am Jonie Lim from Malaysia. This is an assignment from a course that I am currently taking on Coursera, Introduction to Music Production. I will be sharing the types and usage of some commonly used studio cables. I wanted to do this in video format, but I guess I would do this better as a blog post.

In stricter terminology, the word "cable" actually refers to the cord or wire that connects an input device and an output device. However, our discussion of studio cables here focuses more on the connectors that act as an interface between the cord/wire and the input/output devices.

There are 2 classifications discussed here. One is balanced versus unbalanced cable; the other is analog versus digital cable.



XLR cable

Xlr-connectors.jpg
XLR cables are normally used for professional audio applications, such as microphones. The ends of this cable have a male and a female XLR connector, which have three pins and three holes respectively. The three contacts are for common/ground, and the positive and negative versions of the signal. This makes XLR a balanced cable, which is good for long distance audio signal transmission without signal loss. It carries an analog signal.

Picture: "Xlr-connectors". Licensed under CC BY-SA 3.0 via Wikimedia Commons.



TS cable

TS 0.25inch mono plug.jpg
TS cables are commonly used for connecting guitar and keyboard line in/out. T stands for tip, and S stands for sleeve. It is a 2-contact connector: one contact for common/ground, and the other for the audio signal. Since part of the audio signal path is carried through the ground, it is an unbalanced cable. It carries an analog signal.


Picture: "TS 0.25inch mono plug" by Mataresephotos - Own work. Licensed under CC BY 3.0 via Wikimedia Commons.



TRS cable

Audio-TRS-Mini-Plug.jpg
Compared to a TS cable, a TRS cable has an additional contact, R (which stands for ring), which allows a TRS cable to work in 2 modes: (1) as an unbalanced cable for stereo audio connections, (2) as a balanced cable for mono audio connections. TRS cables are commonly used for PC line in/out or phone jacks for stereo signal transmission, and for guitar/keyboard as balanced mono transmission. Again, this carries an analog signal.

Picture: "Audio-TRS-Mini-Plug" by Evan-Amos - Own work. Licensed under Public domain via Wikimedia Commons.



MIDI cable

Midi ports and cable.jpg
A MIDI cable usually has three or five conductors: a common/ground wire, and a balanced pair of conductors. This can commonly be found on a digital piano or a keyboard, and it is also widely used to connect a synthesizer as a MIDI controller. It transmits a MIDI signal, which is in digital format. This MIDI data can differ from what you hear from the keyboard when it is played back on a computer. When I used this cable, I found out that the in/out labels on my digital piano actually have to be connected in reverse to the MIDI cable to work properly.

Picture: "Midi ports and cable" by :en:Pretzelpaws with a Canon EOS-10D camera. Cropped 2/9/05 using the GIMP. - en:Image:Midi_ports_and_cable.jpg. Licensed under CC BY-SA 3.0 via Wikimedia Commons.




RCA cable

Composite-cables.jpg
RCA cables are commonly used to carry both audio and video signals, typically for consumer appliances. However, they can be the only available option on budget music gear. They can be used for both analog and digital audio signal transmission; for digital audio, the cables must meet the S/PDIF specification. I do have an RCA cable, but I do not have an interface to control the gain to the line level that my PC can "listen" to.

Picture: "Composite-cables" by Evan-Amos - Own work. Licensed under Public domain via Wikimedia Commons.



USB cable

USB-Connector-Standard.jpg
The USB cable is one of the common connectors used to connect to a PC. For example, one end of the cable can have a MIDI connector, while the other end is a USB connector. The digital data probably doesn't require a "balanced" method to ensure no data loss, thus it is not categorized as a balanced/unbalanced cable.

Picture: "USB-Connector-Standard" by Evan-Amos - Own work. Licensed under CC0 via Wikimedia Commons.


Firewire cable

FireWire800 Stecker.jpeg
The Firewire cable is similar to the USB cable, but it uses a different standard and interface. It is intended for high-speed communication; however, it is not as popular for computer recording as USB. Just like a USB cable, a Firewire cable is used to connect the audio interface to the computer. It transmits the signal in digital format.

Picture: "FireWire800 Stecker" by --Fadi 10:48, 27. Mai 2010 (CEST). Original uploader was Fadi at de.wikipedia - Transferred from de.wikipedia; transferred to Commons by User:Wdwd using CommonsHelper.
(Original text : selbst fotografiert). Licensed under CC BY-SA 3.0 via Wikimedia Commons.



The types and usage of important studio cables are summarized below.

Cable Type Usage
XLR cable
  • Balanced cable
  • Analog audio
To connect to microphones and for balanced signal transfer. 
TS cable
  • Unbalanced cable
  • Analog audio
To connect to guitar/keyboard.
TRS cable
  • Can be either a balanced or an unbalanced cable
  • Analog audio
To connect to guitar/keyboard in balanced mode, and to connect to PC or handset in unbalanced mode.
MIDI cable
  • Balanced cable
  • Digital (MIDI data)
To connect to keyboards/piano or a synthesizer.
RCA cable
  • Unbalanced cable
  • Can carry both analog and digital audio
To connect consumer appliances and budget music gear; for digital audio it must meet the S/PDIF specification.
USB cable
  • Digital signal
Mainly used to connect to a computer.
Firewire cable
  • Digital signal
Mainly used to connect to a computer.

I have taken some photos of my audio cables. Here they are :

A note on the TRS cable and the stereo cable here. The quarter inch TRS, as described above, can be used as a balanced mono or unbalanced stereo cable. The 3.5mm stereo cable is usually used as an unbalanced stereo cable. Hence the name, stereo cable. :)




Hopefully this post gives an overview of the type and usage of important studio cables.

Wednesday, April 16, 2014

Disable ssh timeout on the client side

I think this is the nth time I have Googled for it, looking for the right result that solves this problem. The solution never sticks in my mind. Keeping this for my own reference. :)

Update /etc/ssh/ssh_config with this line :

ServerAliveInterval 300
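
If you only need this for a single connection, the same option can be passed on the ssh command line instead:

ssh -o ServerAliveInterval=300 user@host

Either way, the client sends a keep-alive probe to the server every 300 seconds, which stops idle connections from being dropped.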

Reference : How to disable SSH timeout

The reference says it works for Ubuntu (I personally tested it) / Debian. I set this in CentOS, and it works too. :)




Thursday, March 6, 2014

CGI - Common Gateway Interface

What is the first thing that comes to your mind when you see... CGI? Me -- Perl. But it's actually more than Perl. Here are some examples of CGI in different languages and some setup on Ubuntu to get the web server running.

To set up Apache2 on Ubuntu, this will install and run the Apache2 server :

sudo apt-get install apache2

By default, the cgi directory is set up at /usr/lib/cgi-bin. Just put your cgi scripts or executables in this directory (make sure they are executable) and they can be accessed via http://localhost/cgi-bin/path. This can be modified in the config file. The default config file is at /etc/apache2/sites-available/default. The relevant section of the file looks like this :

 
 ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
 <Directory "/usr/lib/cgi-bin">
  AllowOverride None
  Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
  Order allow,deny
  Allow from all
 </Directory>


The access and error log can be found at /var/log/apache2/.

OK, now begin the example on cgi scripts/executable.

Let's start with Perl.

#!/usr/bin/perl

use strict;
use warnings;
use CGI;

my $cgi = CGI->new();   # create the CGI object
print $cgi->header();   # prints the Content-type header plus a blank line
print "<b>Hello!</b>\n";


Perl has a CGI library, which you can use to print the required header before your content, so the web server recognizes the response as valid. Without the CGI library, the code can be as below.

#!/usr/bin/perl

print "Content-type: text/html\n\n";
print "<b>Hello!</b>\n";


"Content-type: text/html\n\n" is expected header for html document, and if it's a text document, the html can be replaced with plain.

I had heard that C/C++ could be used for web applications, but I never knew how, or I didn't bother to Google it. It turns out that's also a form of CGI, so now I know CGI != Perl. :)

For C code :

#include <stdio.h>
int main(void) {
  printf("Content-Type: text/html\n\n");
  printf("<b>Hello</b>\n");
  return 0;
}

The C code needs to be compiled. You can use cc or gcc to compile it. Example :

cc -o hello.cgi hello.c

Now, let's go for shell scripts!

#!/bin/bash
# Print the CGI header, the blank line that ends it, then the body.
# Note: plain echo would print a literal \n, so none is needed here.
echo "Content-type: text/html"
echo ""
echo "<b>Hello</b>"

All of these scripts give the same result. Make sure they are all executable. :)
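
To test, assuming the shell script was saved as hello.cgi under the cgi-bin directory (a name I picked for illustration), make it executable and fetch it with curl :

sudo chmod +x /usr/lib/cgi-bin/hello.cgi
curl http://localhost/cgi-bin/hello.cgi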


References :
https://www.cs.tut.fi/~jkorpela/forms/cgic.html
http://help.cs.umn.edu/web/cgi-tutorial

Thursday, February 27, 2014

RCS - Revision Control System

I almost forgot about RCS! Keeping this for my future reference.

This revision control system requires minimal setup. All I need is to have RCS installed.

To install in Ubuntu :

sudo apt-get install rcs

To check in your file :

ci filename

You will see a new file created in the same directory :

filename,v

And you'll be wondering where your file went.

To keep your file "visible", add the -u switch.

ci -u filename

To make things more organized, create an RCS directory in your working directory, so the ,v files will be stored in the RCS directory.

If you used ci without the -u switch, you can use the co command to check out the file.

co filename

To check out and lock the file for editing, use the -l switch :

co -l filename

To diff your changes against the last checked-in version :

rcsdiff filename

To see the log and change summary of each check-in :

rlog filename
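
Putting it all together, a typical session with a hypothetical file notes.txt might look like this :

mkdir RCS              # keep the ,v files tidy
ci -u notes.txt        # initial check-in, keeping the working file visible
co -l notes.txt        # lock the file for editing
vi notes.txt           # make some changes
rcsdiff notes.txt      # compare against the last checked-in revision
ci -u notes.txt        # check in the new revision (prompts for a log message)
rlog notes.txt         # view the revision history
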

Some screenshots for the above mentioned commands.