Inspirated


May 6, 2010

GNU Screen + Irssi + PuTTY for Symbian

Filed under: Blog — krkhan @ 1:28 am

A match made in heaven.

Using IRC in a reliable way has turned out to be kind of a challenge for me in the past couple of months. I had my cellphone connected 24/7 to the IRC channels I needed to idle in. Unfortunately, I don’t live in a 3G country so any voice-calls interrupted the whole thing.

I also wanted to use my laptop for IRC-ing whenever I was at home. But the inconsistent internet connection didn’t make things any easier. There was always this lingering fear of missing important messages during one of the disconnections. I couldn’t see a solution that would fix all of these issues until someone recommended using Irssi along with GNU Screen.

This not only fixed every little issue I had ever had with IRC but also made full use of my love for all things command-line. In summary: I now have a “permanent” IRC session running on an SSH server in Lithuania. Whenever I feel like it, I can “attach” my laptop or my mobile and start using Irssi. If I receive any voice-calls, the IRC session keeps running and I can “reattach” later on. Simply put, the session keeps running even when I’m not attached to it through either device, and at any time of day I can connect to it and catch up on that day’s IRC activity.

Here’s a screenshot that shows me connected to the #gsoc channel on Freenode first on my laptop and then on my E71 using the same screen session:

Irssi sessions on laptop and mobile

PuTTY for Symbian is used for SSH-ing from the Nokia phone. If I ever rank the top 5 situations where CLI absolutely pwns GUI in terms of efficiency and usability, this nifty setup is definitely going to make the list.
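
For anyone wanting to replicate the setup, it boils down to a handful of commands. This is only a minimal sketch; the username and hostname below are placeholders for the actual shell account.

    # on the shell server (the box in Lithuania, in my case)
    ssh user@shell.example.com
    screen -S irc irssi       # start irssi inside a named screen session
    # detach with Ctrl-a d and log out; irssi keeps running on the server

    # later, from the laptop or from PuTTY on the phone
    ssh user@shell.example.com
    screen -dr irc            # detach any other client and reattach here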


July 5, 2009

The top 5 worst mistakes on command-line

Filed under: Blog — krkhan @ 10:58 pm

I could start off with an intro paragraph here but I’d prefer to keep it sweet and simple: command-line is addictive.

For many kinds of tasks — ranging from system administration to organizing folders — I find CLI to be vastly more productive than GUI clicking. For example, while vim-ing through code, if I decide I need to look up a particular symbol in the current directory, I can quickly do a recursive grep without even taking my hands off the keyboard. Similarly, I find utilities such as mv or cp to be significantly faster than GUI file managers’ equivalent features. The learning curve is definitely steep, and I’m not implying that everyone should find it equally productive, but for me at least, it works like a charm.

It wouldn’t be an exaggeration to say that CLI puts an immense amount of power at the fingertips of its users. While that power is tremendous fun, it can also be a source of epic fails if not handled with caution. The fact of the matter is, as one grows accustomed to quickly getting work done through text-based input, overlooking those cautions almost becomes second nature. It’s not uncommon to find a commandaholic holding his head in his hands while staring at the screen in disbelief. GUI does get credit for being a little less prone to accidental mistakes, since it consistently provides a visual view of what’s about to happen.

Moving on from the ill-starred mischief that I posted about last week, I thought I should compile a list of the all-time worst incidents of me cursing my fingers for being so familiar with the CLI. Here they are:

  1. Ctrl-C

    Sometimes, I blame Christopher Sholes for putting the Z and C keys so close on the keyboard. ‘Nuff said.

  2. Deleting the wrong partition in parted

    (parted) help rm
      rm NUMBER                                delete partition NUMBER

        NUMBER is the partition number used by Linux.  On MS-DOS disk labels,
        the primary partitions number from 1 to 4, logical partitions from 5
        onwards.

    If you’re wondering why deleting a partition is placed so low on the list, the answer is TestDisk. Mere seconds after I deleted the primary partition containing all my data, I stopped all activity, booted into rescue mode and used that God-sent utility to rebuild my partition table exactly as before, with a cumulative data loss of 0%.

  3. e2fscking (read: e2fucking) a mounted file-system

    [root@orthanc ~]# e2fsck /dev/sda2
    e2fsck 1.41.4 (27-Jan-2009)
    /dev/sda2 is mounted.  
    
    WARNING!!!  Running e2fsck on a mounted filesystem may cause
    SEVERE filesystem damage.
    
    Do you really want to continue (y/n)? 

    See that SHOUTING WARNING? I did too. But back then, I ignored it as casually as anyone ignores licensing agreements. Needless to say, the results weren’t as inconsequential as clicking “I accept” and moving on without a hint of doing something legally binding.

  4. rm -rfing the wrong directory

           -f, --force
                  ignore nonexistent files, never prompt
    
           -r, -R, --recursive
                  remove directories and their contents recursively
    

    If the last tool had the F-word in its title as the warning, this one should be read as rm --recursive-f***. It is the H-bomb of command-line tools: once you detonate it on a directory you didn’t mean to set it upon, even Ctrl-C won’t be able to keep you in one piece, thanks to rm’s ruthless speed and efficiency. The only ray of hope is ext3grep, but depending on numerous factors (partition structure, number of files, file types, alignment of stars etc.) your recovery prospects would range anywhere from ±100% to ±100%. You read that right.

  5. mkfsing the wrong partition

           mkfs  is  used to build a Linux file system on a device, usually a hard
           disk partition.  filesys is either the device  name  (e.g.   /dev/hda1,
           /dev/sdb2).   blocks  is  the  number of blocks to be used for the file
           system.

    The granddaddy of all command-line fuckups. If you have confused /dev/sdb with /dev/sda (an easy slip-up — as I learned the hard way), it’s time to move on; the sanity check sketched right after this list would have saved me here, as well as in #2. Sure, you will find people selling tools for recovering data from formatted Ext3 partitions; expecting those tools to work would be a lot like expecting the King of Pop to miraculously pop up from his coffin on Tuesday and perform a ground-breaking reenactment of the ’83 Motown performance.
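
For what it’s worth, here is the kind of sanity check I mean: a minimal sketch, with /dev/sdb standing in for whichever disk you are about to operate on.

    # see which block devices the kernel actually knows about
    cat /proc/partitions
    # print the label, size and partition list before touching anything
    parted /dev/sdb print
    # make sure nothing on the disk is still mounted
    mount | grep sdb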

“Blessed are the forgetful; for they get the better even of their blunder.” — Friedrich Nietzsche


March 1, 2009

Top Five Improved Open-Source Projects

Filed under: Blog — krkhan @ 3:55 pm

“Evolution is God’s way of issuing upgrades.”

It’s a wonderful age to live in as an open-source enthusiast. The warm feeling is especially accentuated after recalling countless hours of hair-pulling spent trying to make that goddamned VGA monitor work with Red Hat Linux 6. Software for GNU/Linux has improved at an exponential rate. There is still plenty that lacks user-friendliness, but technologically the overall rate of improvement has been nothing short of astounding.

Ask any newcomer to the GNU/Linux world about their favorite open-source projects, and there’s a strong likelihood that the answer will be one of the “prominent big-guns”: the likes of Compiz Fusion, Firefox, KDE or Gnome. Ask any veteran the same question and you’re much more likely to get a diverse stock of answers ranging from Vim to Anjuta, or probably even some obscure window manager like Fluxbox. Opinions about the most “improved” projects are thus bound to be highly divided. Still and all, there are a few projects which have eased my life substantially with their progress. To compliment the ones that have almost made me kiss virtual bits of code at one point or another, I’ve decided to choose the top five:

  1. recordMyDesktop
    The Dark Ages: Recording a video of an X session was nothing less than a nightmare. With sound, all the more so. The popular method was to run a VNC server and then use a tool such as vnc2swf to capture the footage.
    The Messiah: Once you install recordMyDesktop and one of its GUI frontends, recording becomes as easy as launching it and selecting “Record”. Really, you no longer need multiple pieces of software for doing something as simple as that.

  2. NetworkManager
    The Dark Ages: You went to your workplace, geared up your Linux distribution, tried to get some connectivity and, to your utter horror, the wireless network used WPA encryption. wpa_supplicant was the command-line utility you could’ve used for connecting to such networks after hours of tinkering around, but it sadly wouldn’t have prevented you from getting fired, because the execs weren’t that amicable toward open-source evangelism in the first place.
    The Messiah: Red Hat, for all the criticism it receives for RHEL, is still the caring patron figure for desperate Linux users crying out for help. Hence, it’s no coincidence that this project, as well as the next two on the list, were initiated by the same company. NetworkManager makes mobile connectivity as peachy as it could be. Spend a few days with NM on your notebook and it starts choosing the best network for you wherever you go, and with the least possible intrusion into your workflow.

  3. SELinux Troubleshooter
    The Dark Ages: In this particular case, the dark ages don’t belong to that distant a past, since the cause of all the mess was itself a recent innovation. Security-Enhanced Linux, while obstinately preached by Red Hat and enabled by default on the operating systems it ships, was unanimously loathed by system administrators, who at one point or another had given up hope and disabled it completely on their networks. The error messages it churned out on a regular basis were not only cryptic, but also critically hampered everyday usage of their host operating systems.
    The Messiah: With improved default policies, the situation was somewhat resolved for the general user. Nevertheless, irregularities still kept popping up occasionally, and hence came SELinux Troubleshooter to the rescue. For every cryptic denial that SELinux pops up, the Troubleshooter will analyze it and even suggest workarounds, so that you don’t have to manually mess with policy modules every time something perfectly legitimate gets labeled as “unauthorized” access.

  4. PulseAudio
    The Dark Ages: The music player was playing a song and you tried having a voice-call or playing another video = epic fail. The audio device was usable by only one application at a time. In fact, sound was the Achilles’ heel for default setups of pretty much every Linux distribution that existed.
    The Messiah: Playing a soundtrack in one application with the volume tuned to max while a video runs in another at half volume is no longer a fantasy. And no, ESD doesn’t even come close to PulseAudio in “seamless” multiplexing of such sounds. If you want more, Pulse can combine multiple soundcards into one and also — hold your breath — redirect audio streams to different hardware on the fly (see the sketch after this list).

  5. TrueCrypt
    The Dark Ages: Disk encryption had been an ultra-geek thing for quite a while, especially on Linux. Software that provided such features needed to have modules compiled manually and loaded into the running kernel, which opened up a whole plethora of compatibility issues that almost always made newcomers decide against the whole idea.
    The Messiah: God bless the developer who had the idea of using FUSE in TrueCrypt for mounting encrypted containers. As a consequence, once a user has installed TrueCrypt, the whole thing doesn’t need to be recompiled every time the kernel is updated. Also, thanks to wxWidgets, the GUI has drastically improved too, making it easier for even Linux newbies to use disk encryption.
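
To give an idea of what that on-the-fly redirection looks like in practice, here is a minimal sketch using PulseAudio’s pactl utility; the stream index and sink name below are placeholders you would read off the output of the list commands.

    # list the available sinks (output devices)
    pactl list short sinks
    # list the playing streams (sink inputs) and note the index of the one to move
    pactl list short sink-inputs
    # move that stream to another sink while it keeps playing
    pactl move-sink-input 42 alsa_output.usb_headset.analog-stereo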

Fortunately, unlike users of commercial operating systems, GNU/Linux users don’t have to wait for decades before seeing actual new “innovations” in action. Who knows, maybe next year we’ll have LOLPython topping my list. Anything’s possible.


February 7, 2007

Will GPLv3 mean the demise of collaboration between free and open source software?

Filed under: Blog — krkhan @ 2:16 am

Nowadays, the media’s general perception of open source is that of an efficient development model which is rapidly gaining a user base. I too believed that open source has a bright future, but after I read Linux.com’s report on the rumor that the Free Software Foundation is trying to stop Novell from selling its Linux-based distribution, darker prospects started looming in my mind.

RMS and the FSF care more about ideology than technicalities, and that’s what sets free software apart from open-source software. Now consider a situation where RMS tries to include clauses in GPLv3 which do prevent Novell from selling Linux. Things will continue to be fine for quite a while, as the kernel developers aren’t big fans of GPLv3 themselves. However, things will really escalate if the FSF decides to release GNU’s toolchain and coreutils under GPLv3. Novell will be forced to fork the v2 versions, and we’ll be left with an open declaration of war by open-source enthusiasts against free-software evangelists.

Of course, I may just be paranoid about free software, but I don’t see why people dislike the new anti-DRM clauses that are being proposed for GPLv3. Here’s what v3 says about DRM:

The Corresponding Source also includes any encryption or authorization keys necessary to install and/or execute modified versions from source code in the recommended or principal context of use, such that they can implement all the same functionality in the same range of circumstances. (For instance, if the work is a DVD player and can play certain DVDs, it must be possible for modified versions to play those DVDs. If the work communicates with an online service, it must be possible for modified versions to communicate with the same online service in the same way such that the service cannot distinguish.) A key need not be included in cases where use of the work normally implies the user already has the key and can read and copy it, as in privacy applications where users generate their own keys. However, the fact that a key is generated based on the object code of the work or is present in hardware that limits its use does not alter the requirement to include it in the Corresponding Source.

Stopping your code from being used to restrict other people’s freedom was supposed to be the primary incentive for authors releasing their code under the GPL, and the clause quoted above only tries to further prevent the restriction of consumers’ freedom. Opponents of the anti-DRM clause point out that if you had hardware like the TiVo, which runs only particular versions of Linux (verified with cryptographically signed keys), this clause would put the hardware manufacturer in clear violation. The problem here, again, is that these people don’t share the ideology of free software, and consequently don’t see any freedom-restriction issues with TiVo-like products. If I buy some hardware, the choice of code running on it should be entirely mine — that’s freedom.

It’s a pity to finally see the dreaded clash of ideologies between the leading figures of the free and open-source software movements. If the conflict isn’t resolved in a healthy manner, both movements will once again be left behind by proprietary software within a few years, as their respective successes owe themselves largely to their collaborative natures.
