Let's start with a quick look at what an executable is, using 'whatis'. It searches for whole words in the whatis database, which contains short descriptions of system commands.
$ whatis ruby
irb(1), erb(1), ri(1), rdoc(1), testrb(1) - Ruby helper programs
ruby(1) - Interpreted object-oriented scripting language
Now ‘whereis’ does the job of indicating where the specified executable is.
$ whereis ruby
/usr/bin/ruby
/usr/local/bin/ruby
However, note that 'whereis' only "checks the standard binary directories" like /bin, /sbin and /usr/bin. To locate all the different variations of the file 'ruby', irrespective of where they are installed, we'd use 'locate' like so:
$ locate ruby
/Users/ushaguduri/.rvm/ruby-1.8.7-p371/ruby
/Users/ushaguduri/.rvm/ruby-1.8.7-p371@global/ruby
/Users/ushaguduri/.rvm/ruby-1.9.3-p392/ruby
/Users/ushaguduri/.rvm/ruby-1.9.3-p392@global/ruby
/Users/ushaguduri/.rvm/ruby-2.0.0-p0/ruby
/Users/ushaguduri/.rvm/ruby-2.0.0-p0@global/ruby
/usr/bin/ruby
/usr/lib/ruby
This lists all matches of the given pattern in filenames, including the full path. The database is built with 'updatedb' and refreshed periodically. An alternative is the 'find' command, which walks the directory tree for real-time results.
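To see the difference, 'find' searches the tree live with no database involved. A tiny sandboxed illustration (the directory and files here are created just for the demo):

```shell
# 'find' walks the directory tree at query time, no pre-built database needed.
# Sandboxed illustration: the directory and files are created for the demo.
dir=$(mktemp -d)
mkdir -p "$dir/bin" "$dir/lib"
touch "$dir/bin/ruby" "$dir/lib/ruby"

# -name matches the file name, -type f keeps only regular files
find "$dir" -name ruby -type f
```

The trade-off: 'locate' is instant but can be stale; 'find' is always current but has to walk the tree.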
And finally with all the different versions installed, which one is being used when you run ‘ruby’?
$ which ruby
/Users/ushaguduri/.rvm/rubies/ruby-1.9.3-p385/bin/ruby
Since I changed the ruby version with rvm to 1.9.3, it's showing the 1.9.3 executable. If I switched back to the default, it would look like this:
$ which ruby
/usr/bin/ruby
'which' looks for the executable within the user's PATH. How the user's PATH is determined will be left for another post!
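Under the hood, that lookup is just a walk over the PATH entries in order. A rough sketch (my_which is a made-up name, and unlike your shell's real lookup it knows nothing of builtins, aliases or functions):

```shell
# Rough sketch of a 'which'-style lookup: scan each $PATH entry in order
# and print the first executable regular file with the requested name.
# (my_which is a made-up helper; real shells also consult builtins,
# aliases and functions before ever touching $PATH.)
my_which() (
  IFS=:                 # $PATH entries are colon-separated
  for dir in $PATH; do
    if [ -f "$dir/$1" ] && [ -x "$dir/$1" ]; then
      echo "$dir/$1"
      exit 0
    fi
  done
  exit 1
)

my_which sh             # prints e.g. /bin/sh, depending on your PATH
```

The function body runs in a subshell (note the parentheses), so the IFS change doesn't leak into your session.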
ushaguduri@work:Wed Feb 20 15:40:17 -> ls -al
-rw-r--r--@ 1 ushaguduri staff 899B Feb 20 15:26 test_file
This output got me digging deeper into the underlying file system. Notice the @ after the file permissions? That's a way to associate metadata with a file. It's not used by the file system for any functional purpose - just to store additional information, like the source of the file, the author etc.
Bringing up the manual for the ls command describes the @ as an available option too:
-@ Display extended attribute keys and sizes in long (-l) output.
Further digging leads to the xattr command and using it on the test_file above showed interesting data:
ushaguduri@work:Wed Feb 20 15:40:41 scripts(db5-v1-me) -> xattr warmup_redis.rb
com.apple.metadata:kMDItemWhereFroms
com.apple.quarantine
The above two values in particular indicate:
com.apple.metadata:kMDItemWhereFroms: where the file was downloaded from along with a binary property list, if any
com.apple.quarantine: added by the OS the first time a file is downloaded (recording the source of the download), so that it can ask for confirmation when the program is run (to stop malware by ensuring that the user is aware of a program wanting to execute). Once confirmed, the attribute is removed so that the program can run normally without further confirmation.
The xattr command takes several options to manipulate the metadata:
-l --> list the actual values
-d --> delete the attribute
-w --> set the attribute
For example:
ushaguduri@work:Wed Feb 20 15:41:57 scripts(db5-v1-me) -> xattr -l test_file
com.apple.metadata:kMDItemWhereFroms:
00000000 62 70 6C 69 73 74 30 30 A2 01 02 5F 10 4F 68 74 |bplist00..._.Oht|
00000010 74 70 73 3A 2F 2F 74 69 63 6B 65 74 73 2E 73 6D |tps://<website url>|
.......
000000A0 00 00 00 00 01 01 00 00 00 00 00 00 00 03 00 00 |................|
000000B0 00 00 00 00 00 00 00 00 00 00 00 00 00 9B |..............|
000000be
com.apple.quarantine: 0001;51253196;Google Chrome DEV.app;193F85B5-63F1-4A50-A83E-5713ED49D904|com.google.Chrome
ushaguduri@work:Wed Feb 20 15:41:59 scripts(db5-v1-me) -> xattr -w com.apple.metadata:kMDItemWhereFroms http://example.com test_file
Bonus: if you don't want the quarantine attribute set, you can override the defaults on a Mac [if you know what you are doing ;)] like so:
defaults write com.apple.LaunchServices LSQuarantine -bool NO
You start off by finding the current branch you are working on with:
$ git branch
You can get a list of all the branches using:
$ git branch -a --> all local and remote branches
$ git branch -r --> remote branches only
Creating a local branch is as simple as:
$ git branch myLocalBranch
Beware that the above only creates the branch; it does not check it out and set it up ready for use. To do both at once, use:
$ git checkout -b myLocalBranch
While working in branches is good, it is also good practice to keep your branch as close to master as possible (if you want to avoid day-long conflict resolutions) by regularly updating from master, in one of two ways:
merging master into your branch. This creates a separate merge commit, with a default message noting that master was merged in (or a custom message of your choosing).
$ git merge master
rebasing against master, generally a cleaner way (preferred by many) to get the changes into your branch. This replays your commits on top of master instead of creating a separate merge commit.
$ git rebase master
$ git rebase --continue (once you resolve any conflicts)
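To see the branch-and-rebase flow end to end, here is a sketch in a throwaway repo (the repo path, file names and commit messages are all made up for the demo, and `git init -b` assumes git 2.28+):

```shell
# Throwaway end-to-end run of the branch + rebase flow.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master                 # -b needs git >= 2.28
git config user.email demo@example.com
git config user.name  demo

echo base > file.txt
git add file.txt
git commit -qm "initial commit on master"

git checkout -q -b myLocalBranch      # create + check out in one step
echo work > branch.txt
git add branch.txt
git commit -qm "work on the branch"

git checkout -q master                # meanwhile, master moves on
echo more > master.txt
git add master.txt
git commit -qm "more work on master"

git checkout -q myLocalBranch
git rebase master                     # replay the branch commit on top of master
```

After the rebase, `git log --oneline` shows the branch commit sitting cleanly on top of master's latest, with no merge commit in between.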
Note that these are just local branches on your machine. All changes/commits reside on your machine, not on the remote server. We’ll look at remote branches soon.
However, these options are not available via System Preferences, but you can change them from the Terminal. Note the inconsistencies in the boolean values: YES/NO, true/false, 1/0... I'd have expected better from Apple!
Some of the more often looked-for settings are shown below. I've also started working on a small utility called maclets, a compilation of the settings I find myself using frequently.
Disable icon in Application Switcher: add this to the app's Info.plist (the standard LSUIElement key)
<key>LSUIElement</key>
<string>1</string>
Switch off(YES)/on(NO) Dashboard
defaults write com.apple.dashboard mcx-disabled -bool YES
Show(true)/hide(false) desktop icons
defaults write com.apple.finder CreateDesktop -bool true
Disable(false)/enable(true) the character picker on long key press in Lion, in favor of key repeat (this was a really annoying change)
defaults write -g ApplePressAndHoldEnabled -bool false
Disable(true)/enable(false) Ping sidebar
defaults write com.apple.iTunes disablePingSidebar -bool true
Disable(true)/enable(false) Ping stuff in iTunes
defaults write com.apple.iTunes hide-ping-dropdown -bool true
But you still want to learn git? There is no better way to do that than using it every single day - isn't that how you became a pro at svn in the first place? So here comes git-svn to the rescue.
It's a really simple tool that goes bi-directional between git and svn, and it is so well done that often it's just a matter of prepending git in front of svn commands!
Starting a git repo from svn is as easy as:
$ git svn clone <svn-repo-url>
If your svn repo is not using the standard layout of trunk/branches/tags, you can specify what they are using -T, -b and -t, like so:
$ git svn clone <svn-repo-url> -T trunk -b branches -t tags
Before you start making changes though, you might want to set up the annoying 'ignores'. Just copy the svn ignore config over into git with:
$ git svn show-ignore >> .git/info/exclude
Ensure your checkout is pointing to the right repo:
$ git svn info
And now you can start exploring and committing to the local git repo using git commands like:
$ git add <file>
$ git commit -m "my first git-svn commit"
Then comes the slight difference between real git repos and git-svn repos. You've most likely heard about git pushing changes; with git-svn, you'd run:
$ git svn dcommit
to push changes to the svn repo. And there you go - you made your first commit to svn via git. Irony!!
More to come on .gitconfig, branches, cherry-pick'ing etc., but let me suggest installing bash-completion right away so you can tab-complete the commands instead of typing them out in their entirety each time - all about saving those precious strokes that put every software engineer at risk of carpal tunnel!
One option is to run a remote command via ssh, like:
$ ssh host1 date
But identifying the server where you can find more information means running the above N times, changing the host each time. For more involved tasks, it gets even more difficult. Oh, and what if you accidentally missed a server and claimed that "something did not happen"! You can of course wrap the command in a loop and come up with a shell script (choose your own language).
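That loop version would look something like this (host names are placeholders, and the actual ssh call is left commented out so the sketch runs without any remote machines):

```shell
# The naive serial wrapper: visit each host one at a time.
# Host names are placeholders; the real ssh invocation is commented out
# so this sketch runs anywhere.
hosts="host1 host2 host3 host4 host5 host6 host7 host8"
for h in $hosts; do
  echo "== $h =="
  # ssh "$h" date      # what you would actually run per host
done
```

Note it runs strictly serially - eight hosts means eight round trips, one after another.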
But pdsh simplifies all that. It's pretty much like your own wrapper, only with more features - especially running in parallel. And fear not, it's not like learning another language. Even a simple run, without many of the options it provides, is enough to free up a ton of your time.
$ pdsh -w host[1-8] date
Looks so familiar, ain't it? And all it does is run the date command remotely, in parallel, on host1 through host8.
Whoa! That's how many keystrokes and minutes of your life saved?
Try it for yourself: Google Code
This is just a flyer ad for pdsh. More details will follow as I keep using it.
Called the Canonical Name, it is similar to saying Jane Smith and Mrs. Smith are the same person. Almost all websites have a single default CNAME for www. If you see sites where example.com works but www.example.com fails, it is most likely because of this missing CNAME, which looks as simple as:
www.example.com.    IN    CNAME    example.com.
And all it is saying is that www.example.com is just another name for example.com, so serve the request as if it went directly to example.com. An A-record, on the other hand, is telling you where to find Jane Smith and Mrs. Smith. In cyberspace, it's the address of the machine capable of servicing the requests to example.com, and it looks equally simple:
example.com.    IN    A    123.13.13.1
In essence, the minimum DNS entry when you set up a new site should look like:
example.com.        IN    A        123.13.13.1
www.example.com.    IN    CNAME    example.com.
The order does not matter as much, but note that each CNAME needs a minimum of 2 lookups to get to the machine to talk to.
So choose wisely, depending on how many hoops are required to finally get to the machine!
For the geek in you: the trailing . in the domain names stands for the imaginary root server in the internet hierarchy, and this is called FQDN (Fully Qualified Domain Name) notation.
So anyway, once you have acquired hosting space, the seller gives you an IP address of a physical machine where you can 'set up your html files'. Depending on the seller (HostGator, for instance), you may have to 'Enable SSH Access' to open the ssh port.
Then you can type
$ ssh username@<ip-address>
from the shell and voila! you are at the all too familiar bash prompt on the hosting server!
But notice that you had to enter your crazy, incomprehensible password, which you most likely copy-pasted from the email. If you plan on spending any significant time on this box, you will be getting in and out of it, forced to enter the password each time - SSH keys to the rescue! I generally use RSA for my keys, generated on my own desktop/laptop.
$ ssh-keygen -t rsa
will generate the necessary public/private keys in your .ssh directory.
You then create a .ssh directory on the new machine and add the id_rsa.pub key to a new authorized_keys file on the server. Here is how the setup looks at the end --> note the permissions: 700 for the .ssh directory and 644 for the authorized_keys file
drwx------   .ssh
-rw-r--r--   .ssh/authorized_keys
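Setting those permissions is just a couple of chmod's; here is a sketch against a scratch directory standing in for the real home directory:

```shell
# Sketch of the server-side setup, using a scratch dir in place of $HOME:
# 700 on the .ssh directory, 644 on authorized_keys, as noted above.
home=$(mktemp -d)
mkdir "$home/.ssh"
chmod 700 "$home/.ssh"

touch "$home/.ssh/authorized_keys"   # in practice: append your id_rsa.pub here
chmod 644 "$home/.ssh/authorized_keys"

ls -ld "$home/.ssh" "$home/.ssh/authorized_keys"
```

sshd is picky about these modes: if .ssh or authorized_keys is writable by anyone else, key authentication silently falls back to the password prompt.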
And the next time you ssh to the host, you don't have to go searching for the password in your email anymore!
Now where does that start? With signing up for an easy-to-recognize name associated with the entity, at a Domain Registrar - one who makes sure that you own that name and no one else can get it, like GoDaddy, 1and1, Network Solutions etc. And you pay for their services - anywhere from $10 to $100+ a year. (Prepare to shell out in the thousands if you want a name real bad that is already taken.) At this point, you only have the name to your credit; visiting the site in a web browser shows a generic message at most, nowhere close to what you actually want it to be.
The next step is to get some space on a machine that is always connected to the internet, so that John in Bangkok or Joe in Paris can see your website at any time of day. This is called Web Hosting, and there are again many choices - shared, dedicated, colocation - including the big names like Amazon Web Services, Google App Engine, Microsoft Azure etc.
For a simple website, I suggest something like HostGator (best of the lot if you ask me), BlueHost, 1and1, Namecheap and such, where you can get both the above mentioned services in 1 spot with an easy-to-use Control Panel to manage everything.
By the way, for the geek in you,
$ whois example.com
shows the information you used to sign up with the Domain Registrar and
$ dig example.com
$ nslookup example.com
$ host example.com
show which IP address (machine) the domain is being served from.
Oh, and the next time someone ponders why they are paying twice and whether they are being over-charged, I can point them straight here.. hehe
Easiest way: simply copy the file back from elsewhere and svn add it.
Better way: restore the file, including its svn history.
To see which revision deleted the file foo.file, scan the verbose log for the 'D /path/foo.file' entry:
$ svn log -v <repo-url>
Now copy the file from the revision just before the deletion, using the repo url, not just the location on the local machine. If, say, r101 deleted it, copy it back from r100:
$ svn copy <repo-url>/path/foo.file@100 <repo-url>/path/foo.file -m "Restore foo.file deleted in r101"
And now the svn log still shows the entire history, without the deletion:
$ svn log <repo-url>/path/foo.file
I spent enough time figuring this out that I thought it best to note it here. svn was coughing with:
$ svn ls svn+ssh://myhost/myrepo
svn: To better debug SSH connection problems, remove the -q option from ‘ssh’ in the [tunnels] section of your Subversion configuration file.
svn: Network connection closed unexpectedly
The configs (~/.ssh/config and /etc/ssh_config) had nothing specific to tunnels anywhere. I could log in to myhost just fine and also port forward to it without problems, as before, using the long-standing aliases - so nothing stood out. The preliminary instinct is to purge the known_hosts file in ~/.ssh/known_hosts and rebuild it in case the host's address had changed - no luck. You could see what ssh is doing on a connection with ssh -v, but this is svn+ssh, isn't it? A one-liner to the rescue!
$ export SVN_SSH="ssh -v "
and then retry the svn command to see:
ssh: Could not resolve hostname myhost: nodename nor servname provided, or not known
Are you kidding me?! Your stupid error message ran me down a dark alley about tunnels and the problem was actually with the hostname?!
Solution:
As simple as using the FQDN for myhost with each svn command:
$ svn ls svn+ssh://fqdn.myhost.com/myrepo
The above is cumbersome if you already have aliases set up or are working with new repos. Better yet is to add the actual IP address for the host to your hosts file in /etc/hosts:
123.13.13.1 myhost
Lesson: Do not trust error messages at face value until you get more debug info about where/who it's coming from!
Say you have a table with a dependent view (placeholder names for illustration):
CREATE TABLE users (id INT, name VARCHAR(50));
CREATE VIEW user_names AS SELECT id, name FROM users;
If you then change the table's structure and expect to query the view:
ALTER TABLE users CHANGE name full_name VARCHAR(50);
SELECT * FROM user_names;
it sure is going to fail.
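For illustration, a similar failure mode can be reproduced from Python with SQLite, which refuses outright to drop a column that a view still references (assumes SQLite 3.35+ for DROP COLUMN; table and view names are placeholders, and MySQL's behavior differs in that the ALTER succeeds and the error appears only when the view is queried):

```python
import sqlite3

# Reproduce the dependent-view trap in SQLite (illustrative only).
# The view still references the 'name' column, so SQLite rejects the DROP.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE VIEW user_names AS SELECT id, name FROM users;
""")

try:
    conn.execute("ALTER TABLE users DROP COLUMN name")
except sqlite3.OperationalError as e:
    print("ALTER rejected:", e)
```

Either way - an up-front rejection (SQLite) or a broken view at query time (MySQL) - the dependent view is what bites you.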
Solution: Either drop the view and recreate it:
DROP VIEW user_names;
CREATE VIEW user_names AS SELECT id, full_name FROM users;
or alter the view:
ALTER VIEW user_names AS SELECT id, full_name FROM users;
Lesson: Do not forget about the dependent views when you change a table’s structure!
Your approach really depends on how the migrations are set up. A simple solution in any case (if you are using straight SQL) is to wrap the actual migration sql in a stored procedure and set up continue handlers only for the errors you are already expecting, like so:
DELIMITER //
CREATE PROCEDURE run_migration()
BEGIN
  -- 1060 = duplicate column name: the column was already added
  DECLARE CONTINUE HANDLER FOR 1060 BEGIN END;
  ALTER TABLE users ADD COLUMN email VARCHAR(100);
END //
DELIMITER ;
CALL run_migration();
If you are gearing up for the table not existing either, then add another handler, for 1146:
DECLARE CONTINUE HANDLER FOR 1146 BEGIN END;
If you are working with views by the same name, you'd see "is not BASE TABLE" (error 1347), not "table does not exist". So you'd need:
DECLARE CONTINUE HANDLER FOR 1347 BEGIN END;
Essentially, it's try/catch for SQL! The above handlers, as you may have noticed, do nothing between the BEGIN and END, but you can put any additional SQL in there. Just make sure to declare the handlers before you execute the SQL that is going to "throw" these errors.
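The same try/catch idea, sketched in Python against SQLite (error numbers differ from MySQL's 1060/1146/1347, so this sketch matches on the exception message instead; the table and column names are placeholders):

```python
import sqlite3

# Continue-handler idea as application-side try/catch: run a migration
# statement and swallow only the error you expect (re-adding a column
# that already exists), letting anything unexpected propagate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

try:
    # re-running the same migration raises 'duplicate column name: email'
    conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
except sqlite3.OperationalError as e:
    if "duplicate column" not in str(e):
        raise                    # only swallow the anticipated error
```

The re-raise in the except branch is the important part: a blanket catch would hide genuinely broken migrations, just as an overly broad CONTINUE HANDLER would.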
Lesson: Although there are work-arounds like the above, think through why the purpose of "migrations" is being defeated in the first place, and whether the process (whatever triggered the need for altering the table) should even be part of a migration when you are manually meddling with the database.
Now to the mystery behind dates getting changed between the front end (client browser) and the backend (Java)...
If you think creating Date objects is simple, behold: there is something called a "timezone" which gets used when generating Dates - both by Javascript and Java. And it depends on where the machine is located - in some countries it also depends on the part of the year, thanks to the stupid and confusing "daylight saving time".
In Javascript:
var d = new Date();       // interpreted in the browser machine's local timezone
d.getTime();              // milliseconds since epoch
d.getTimezoneOffset();    // the machine's offset from UTC, in minutes
Similarly in Java:
Date d = new Date();      // formatted using the JVM's default (local) timezone
So even if you think passing milliseconds will do the job, you are mistaken: the number itself is unambiguous, but each side formats and interprets it in its own machine's local timezone (Javascript speaks in terms of UTC, Java in terms of GMT).
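To make the point concrete, here is one epoch value rendered under two fixed offsets (Python used purely as a neutral illustration; the -05:00 offset is arbitrary):

```python
from datetime import datetime, timedelta, timezone

# One absolute instant (milliseconds since epoch). The number itself is
# unambiguous; what differs is how each machine *formats* it.
millis = 0
seconds = millis / 1000.0

utc_view = datetime.fromtimestamp(seconds, tz=timezone.utc)
est_like = datetime.fromtimestamp(seconds, tz=timezone(timedelta(hours=-5)))

print(utc_view.isoformat())   # 1970-01-01T00:00:00+00:00
print(est_like.isoformat())   # 1969-12-31T19:00:00-05:00
```

Same instant, two different wall-clock readings - which is exactly what the browser and the JVM end up showing when they sit in different zones.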
Solution: There are several approaches to solving this problem, each needing careful evaluation. One is to force the JVM's default timezone to UTC:
-Duser.timezone=UTC
The problem, however, is that you will then be swamped with UTC dates all over the java land (logs, output etc.), which is just too cumbersome when debugging a problem.
Lesson: Never send milliseconds since epoch back and forth, since they are interpreted differently based on where the machine is located!
Element in the works: jQuery UI's Slider
There's nothing straightforward in Watir to change the slider and trigger its events as if the user had interacted with it. In a typical slider implementation, those would be the 'change' or 'stop' events on the slider. So how do you do it? Use execute_script to do what you need directly via javascript.
var ui = { value: 50 };                             // self-created ui object
$('#slider').slider('value', ui.value);             // move the slider
$('#slider').slider('option', 'change')(null, ui);  // invoke the change handler
event and ui are objects you can self-create and pass to the handler, since your change handler's signature would have been:
$('#slider').slider({
  change: function(event, ui) { /* ... */ }
});
Now that you know the js, you can simply execute it on watir's 'browser' instance.
Solution:
browser.execute_script("$('#slider').slider('value', 50); $('#slider').slider('option', 'change')(null, { value: 50 });")
Lesson: Watir can be hacked to work even with complex and dynamic web pages ;-)
First impression: Awesome!
Then we added rspec and rake when the nightmares began - more about that later perhaps.
For a simple page with a few DOM elements, it's pretty straightforward. Enter a highly dynamic page with complex rendering logic, and the DOM access breaks down. Something even as intellectually simple as clicking an element by its id (placeholder locator):
browser.div(:id => 'some_id').click
actually coughed and puffed with the watir-webdriver <0.5 versions, specifically on Firefox. But of course, there is a workaround - XPath! We are 'Engineers' after all!
browser.element(:xpath => "//div[@class='outer']/div[@class='inner']/span").click
The problem with XPath, though, is that it's absolute and highly specific. If you so much as change one of the class names the XPath references, the XPath breaks and so do your tests.
Solution: Update to watir-webdriver 0.5.2
Lesson: Be wary of anything < 1.0