
Tag Archives: command-line

I finally got tired of manually typing out all of the characters in the branch names that JIRA was creating for my projects on the M1 MacBook Pro that work has provided for me.

The solution is really simple, and I’m kicking myself now for not having done it earlier.

Inside a Zsh terminal, type the following two lines:

echo 'autoload -Uz compinit && compinit' >> ~/.zshrc
source ~/.zshrc

That will enable tab autocomplete and reload the shell so the change takes effect immediately.
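
With that in place, a long JIRA-generated branch name only takes a few keystrokes. For example (this branch name is made up):

git checkout feature/PROJ-12<TAB>
#zsh completes it to something like: git checkout feature/PROJ-1234-update-login-flow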


This weekend, I made an interesting (re)discovery. I was digging through some old emails and found one with details about how an acquaintance and I were creating a centralized git workflow.

I figured out how to keep my project in sync with yours.
I use git on the command-line, but I imagine it should translate to a GUI tool pretty easily.

git remote add upstream <url-of-shared-repo>
git fetch upstream
git stash #I had uncommitted changes that I had to get off the stack first
git rebase upstream/master
git stash apply #to reapply my changes; I had to manually merge a few issues

Anyway, thought I’d share with you in case you get other contributors to your IDE.

Later,

Ben

This was highly relevant for me personally, as we have been discussing the benefits of SVN vs Git at work lately. SVN’s centralized structure is simple enough to understand that everyone groks it from the jump. But keeping Git in sync, with its distributed workflows, is a much more convoluted process for new users.

With the above setup applied to a git repo, a user can maintain local changes while still keeping up to date with the latest pushed changes. Being able to rebase on the most recent changes to the upstream repo (that is, the centralized, shared home of the project) makes keeping everyone in a centralized git workflow much easier.
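
A minimal sketch of the recurring sync loop once ‘upstream’ is set up, assuming your own fork is the ‘origin’ remote:

git fetch upstream
git rebase upstream/master #replay your local commits on top of the shared history
git push origin master #if the rebased commits were already pushed, this may need --force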

If you add ‘--’ to your git command, git will know not to process anything after the double hyphen as a command-line argument.
This was useful to know for the situation I just found myself in. I had accidentally cloned a repo into a directory called “--force”; I had placed the --force argument in the wrong sequence in the clone command. But when I went to remove it from git, I found I could not. Every ‘git rm --force‘ command was throwing up the usage guide for the rm command; git thought I was passing in ‘--force’ as an argument. Then I found out about using the ‘--’ separator with git, and ‘git rm -- --force‘ got rid of my bad directory.
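
For the record, the general pattern looks like this, using my unfortunate directory name (I’ve added -r here since the target is a directory):

git rm -r -- --force #everything after '--' is treated as a path, not an option
rm -r -- --force #ordinary shell commands honor the same '--' convention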

I’ve been working with a colleague on getting a logo designed for the game company he has recently started. The designer who was hired used a font called “Exo” in the logo. The source file was given to us as an .svg file, and in order to view the logo with the correct font, I first had to install the Exo fontface on my Ubuntu installation.
Here’s how I did it.
First, find the fontface for download somewhere. We ended up finding it on fontsquirrel.com and downloading it in .zip format. Next you will need to create a .fonts folder under your /home/<user-name>/ folder. Inside this .fonts folder, create an ‘exo’ folder to store the actual OTF files. Extract the files from the exo.zip file into /home/<user-name>/.fonts/exo. Finally, run the following command:
sudo fc-cache -f -v

You should see all the font caches on your machine getting refreshed, including the new .fonts folder.
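Put together, the whole process is just a few commands (assuming the archive was downloaded as exo.zip into the current directory):

mkdir -p ~/.fonts/exo #per-user font directory
unzip exo.zip -d ~/.fonts/exo
sudo fc-cache -f -v #rebuild the font caches
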
After this, I opened our logo.svg file in Inkscape and the correct font was applied.

Easy as that!

So I’m investigating an Eclipse error I’m getting on my dev laptop, and I need to search for a bit of text that may be in a hidden file somewhere in my workspace. I don’t trust Eclipse’s global File search (Ctrl + H) to be thorough enough for my needs; I need to do this via the OS.
The ‘grep’ command is the solution.

It is used like so:

sudo grep "<text-to-search-for>" <file(s)>

For example:

sudo grep "IOException" *.log

Now, my particular issue meant that I didn’t know which folder the file would be located in. To get around that, I used the recursive flag, ‘-r’, along with ‘--include’ to limit the search to log files:

sudo grep -r --include="*.log" "IOException" .

This searches every .log file in the current directory, as well as in all of its sub-directories. (A plain ‘grep -r "IOException" *.log’ would only recurse into paths matching *.log at the top level, which isn’t what I wanted.) At least, that’s how I understand it to work; feel free to correct me if that’s not the case.

I am not sure how long Github has offered their Pages service, but I only recently discovered it. And, as I have been trying to improve my workflow, I found the idea of deploying my site with a simple “git push” very enticing.
I experimented briefly with Pages and decided to go with Github for my webpage hosting. However, I quickly realized I had a problem. I had an existing Github repo that housed the code for bencarson.net. I also had a Github repo named ben-carson.github.io that I had created for my Pages experimentation. I didn’t want to lose the history of my original website project, and I also didn’t really want to just copy-and-paste my original site’s files over my Pages repo; that just felt too inelegant. What I really wanted was to combine these two distinct git projects and their histories into one project.
My Git-fu is still pretty weak, so I took to the web for help. My initial searches provided results on merging subtrees, submodules, and whatnot; more along the lines of keeping a library that a project uses up-to-date, rather than a one-time project meld. Way overkill for my needs. Then I found this post. It was close to exactly what I was looking for.
For the sake of clarity, I’ll include just the (DOS) commands I ran for this process:

C:\> mkdir bencarson-website

C:\> cd bencarson-website

C:\bencarson-website> git init
Initialized empty Git repository in C:/bencarson-website/.git/

#create a throwaway file so the initial commit isn't empty
C:\bencarson-website> dir > deleteme.txt

C:\bencarson-website> git add .

C:\bencarson-website> git commit -m "Initial commit"

C:\bencarson-website> git remote add bc-gh-io-remote https://ben-carson@github.com/ben-carson/ben-carson.github.io.git

C:\bencarson-website> git fetch bc-gh-io-remote
warning: no common commits
remote: Counting objects: 73, done.
remote: Compressing objects: 100% (55/55), done.
remote: Total 73 (delta 11), reused 69 (delta 10)
Unpacking objects: 100% (73/73), done.
From https://github.com/ben-carson/ben-carson.github.io
* [new branch] master -> bc-gh-io-remote/master

C:\bencarson-website> git merge bc-gh-io-remote/master

C:\bencarson-website> git rm deleteme.txt
rm 'deleteme.txt'

C:\bencarson-website> git commit -m "removing garbage file"
[master 05b4839] removing garbage file
1 file changed, 10 deletions(-)
delete mode 100644 deleteme.txt

#combine the 'remote add' and 'fetch' commands with the '-f' parameter
C:\bencarson-website> git remote add -f bc-net-remote https://ben-carson@github.com/ben-carson/bencarson.net.git
Updating bc-net-remote
From https://github.com/ben-carson/bencarson.net
* [new branch] master -> bc-net-remote/master

C:\bencarson-website> git merge bc-net-remote/master

C:\bencarson-website> git remote remove bc-net-remote

C:\bencarson-website> git remote rename bc-gh-io-remote origin

C:\bencarson-website> git push origin master

After this, my new, history-merged site was pushed up to ben-carson.github.io and is available [Edit 03.10.15: removed all my github stuff]! Super easy, once you know how to do it.

Somehow, I broke my Fedora installation this weekend.

After having a great time getting started on my Gravatar project this past Saturday, I settled in for some Labor Day programming yesterday. However, as soon as I tried to open Eclipse, I was given an error message telling me that the command ‘java’ was not in the PATH variable (or something like that).

I have no idea what the hell happened between Saturday and Monday; my laptop was shut down between those times.

At first, I thought it might be some leftover quirks from the time that Fedora shut itself down when my battery died (the fact that it didn’t warn me is pretty crappy). I ran some fsck’ing on a few mounts, but I didn’t really know what I was doing there, so I wasn’t surprised when that didn’t work.

I decided to go for the obvious and just fix the PATH variable. This required me to figure out where this damn variable is in Linux. I also wanted to uninstall Oracle’s JDK 7, as I thought that might have been a part of the problem. First, linuxquestions.org helped me uninstall all the java packages with ‘yum upgrade -y’. I don’t know what it does, but it set me back to using OpenJDK 7. Then a Stack Overflow post helped immensely by showing me where my PATH variable was and how to alter it.

After I added the OpenJDK filepath to my PATH variable, I logged out and logged back in. ‘java’ at the command line works again, and Eclipse opens without a problem.
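
For reference, the fix amounted to a line like this in my shell profile (the JDK path below is a typical Fedora location, not necessarily the exact one on my machine):

#in ~/.bash_profile
export PATH=$PATH:/usr/lib/jvm/java-1.7.0-openjdk/bin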

Issue resolved.

I have begun working with CodeIgniter 2.1.0 and am trying to wrap my head around what exactly is required for an app to be deployed onto a server.

This required me to move some files and folders around. Unfortunately, I set up my XAMPP installation as ‘root’, so I can’t use the lovely file manager-type windows in Mint that I’m used to for copying files from one directory to another. I keep getting “Permission denied” errors.

I vaguely remember something about chmod from my college days. It has something to do with an octal code, where each digit corresponds to write, read, or read/write permissions for different “things” (that’s what the owner/group/other user permissions were in my head before this experience).

I broke out my Linux pocket guide and found that there are actually three relevant commands: chown, chgrp, and chmod. Since I am working with files that are in my “jooky” directory but are owned by ‘root’, I figured that changing the owner would fix my problem, so I looked up the chown command. The pocket guide gave me a start, but not enough detail on the command to tell me how to recursively change the owner of a folder and all subitems in that folder. A quick search with my fav search engine turned up this gem.
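
For reference, here’s how the three commands divide the work (the file names are made up):

chown jooky file.txt #change the file's owner
chgrp jooky file.txt #change the file's group
chmod 644 file.txt #set permissions: owner rw-, group r--, others r--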

Slapped ‘sudo chown jooky:jooky -vR /home/jooky/workspace-web/website’ on the command line and got to watch sweet line-on-line action of all of the files in my Eclipse workspace being reassigned ownership to ‘jooky’.

Now that that’s complete, I am able to move files around my Eclipse environment without a problem.

So after I stopped the FTP upload from my inappropriate account, I needed to delete the 726 megs of “illegal” uploads.
I remembered the ‘rmdir’ command, but that only works on empty directories. I remembered this fact as soon as I was given the “rmdir: failed to remove ‘Pictures/’: Directory not empty” response on the command line.
I once again turned to my buddy DuckDuckGo and found the answer.
I needed to use the remove command, ‘rm’, with the recursive parameter ‘-r’.
About two seconds after I entered the ‘rm -r Pictures/’ command, the folder and all of its contents were gone. All four hours of uploading gone in the blink of an eye.
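In short:

rmdir Pictures/ #fails unless the directory is empty
rm -r Pictures/ #recursively removes the directory and all of its contents
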
It is two weeks later and I am at about 10.7 GB uploaded of my 25 GB total. The 512 kbps upload of my AT&T DSL is really not doing it for me right now. Still not enough motivation to switch to Charter, though. Maybe in 5 more months when they try to jack up my rates again.

So I decided to take advantage of the free 50 GB of storage space that DreamHost offers all of its webhosting customers today.
Unfortunately, I didn’t understand that I needed to use a special user account that DreamHost provides for this activity. So I am currently uploading approximately 25 GB of pictures under just a normal user account. I may have my account revoked by the end of the day, who knows.
The situation isn’t too dire just yet, as evidenced by the linux command I just learned.
I logged in to my account from work to see how much data had been uploaded so far. I found the folder, but had no idea how to actually see any detail on the folder’s size. Performing a quick DuckDuckGo search has taught me to use the ‘du’ command to see the size of a folder.
So I ran ‘du -sh Pictures/’ and was greeted with the following response:
616M    Pictures/
The -s parameter means ‘summarize’. This provides only the disk usage of the specified argument; in my case, the size of the ‘Pictures’ folder. Without this parameter, I would be given an itemized list of the disk sizes of all of the folders within this folder.

The -h parameter means to make the output ‘human-readable’, namely converting the raw ‘616543’ kilobyte size to ‘616M’. Very handy since I hate mental math.
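
For comparison, dropping the ‘-s’ itemizes the sub-folders (this output is illustrative, not my actual listing):

du -h Pictures/
120M    Pictures/2011
496M    Pictures/2012
616M    Pictures/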

So it’s not too bad now, but I gotta get home and stop this from getting worse. Good thing it’s lunch time.
I can stop the process and start uploading using the appropriate account now.

Well, I’m off to stop a runaway FTP upload. It’s times like these that I really wish I had set up my laptop for remote control…