Archive for the 'Technology issues' Category

Published by Kenneth on 12 Jan 2010

Follow up: Movable Type and 1&1 Internet

After a couple more e-mails and some more finagling on the back-end server, I have Movable Type installed and running. Now comes the difficult part of actually administering it.

One thing that has become evident from playing with it so far is that Movable Type is intended to run an entire web site. It’s a lot more than just a blogging platform, and in case the thought has crossed your mind: yes, I may consider switching to it to run the Colony West web site, assuming I can find plugins for everything else I’ll need, along with getting it stable…

Actually, if anything, I’ll be more likely to switch my personal web site over to Movable Type before I change the Colony West web site over. At least with my personal site, I won’t have a lot to move – just some existing blog posts to migrate for now. Right now it’s powered by WordPress, the same engine powering this blog.

Now I will say that there are still some quirks – I’m not entirely free of HTTP 500 errors, but they’re happening quickly enough that I know script execution time is not the issue, so I’ve still got some troubleshooting to do.

Published by Kenneth on 11 Jan 2010

Movable Type and 1&1 Internet

Yesterday I tried to install Movable Type on my web server, and it failed miserably. I kept running into HTTP 500 errors just trying to run the installation script, so getting any further was not going to happen.

It was only after a few hours of searching that I discovered the reason: 1&1 Internet (the company that hosts the web site for Colony West Software Company) restricts execution times on scripts to 10 seconds. I’m not entirely sure how accurate this is, however, because uploading files to the web site for distribution goes through a script.

At first I didn’t consider this to be the issue, as I was able to install other content management systems without a problem: Joomla, WordPress, Xoops, and Serendipity, to name a few. But when I saw the issue listed on Movable Type’s wiki, it made perfect sense. I considered an alternative – installing it to a Linux server I have at home and then transferring the installation to the remote server – but I decided this would only provide a temporary gain given what I was reading on MT’s web site.

I would like to switch to Movable Type, as it is more fully featured and has a larger installed base than WordPress – for example, Movable Type is the system that powers The Huffington Post. But unfortunately, with this script execution time limit in place, that can’t happen.

Well, that is, if such a limit is actually in place for my hosting package.

I’ve sent an e-mail to 1&1, so we’ll see what they say. I’ll be sure to post their feedback.

Published by Kenneth on 09 Sep 2009

Windows 7 Sins – Creation vs. Evolution meets the software industry

Recently the Free Software Foundation created a website called “Windows 7 Sins” in which it details seven “sins” that Microsoft is allegedly committing. I’m going to respond to them because they detail clearly how narrow-minded the FSF has become (or has always been).

Note: Their new site makes heavy use of yellow coloring in images and text blocks. This may make the site difficult on your eyes and/or cause eye strain or headaches.

Before they even get to their first “sin”, they reveal their obvious bias by stating that Windows 7 is proprietary software, the “same problem that Vista, XP, and all previous versions have had”. Is this really an issue?

The Free Software Foundation seems to think that computer owners care if their operating system is proprietary or open source. Here’s a news flash: they don’t. And much to the FSF’s frustrations, that won’t change any time soon. Do they think that we’ll just become a world of software engineers? I highly, highly doubt it.

Most who shop around for software also don’t care whether what they select is open source; otherwise there would’ve been thousands of calls for me to open source digestIT 2004. Guess what: there hasn’t been a single one, and the software has been downloaded somewhere between hundreds of thousands and over a million times in the nearly six years it’s been available.

Jack Wallen at Tech Republic recently commented on this web site as well, in which he, predictably, stated that he agrees with the Free Software Foundation’s claims, though he did correctly state that the majority of users couldn’t care less (he says “could care less”) whether they can share, modify, or study what they’re using. Someone should send a memo to the FSF saying the same thing.

But let’s look at each of the 7 claims individually, something that most religious FSF proponents won’t.

1. Poisoning education

Okay this argument is similar to the creation versus evolution debacle. Students aren’t presented with any other options than Microsoft, apparently, and the FSF is screaming like the Discovery Institute was screaming prior to and even after the famous Kitzmiller v. Dover decision in 2005.

In this instance, the Free Software Foundation is complaining about how Microsoft seems to hold a monopoly over public education like the theory of evolution holds a monopoly over public school biology classrooms. Tough luck.

The point to make here is that open source has never been a major part of primary or secondary academia. Apple used to hold the torch there, but Microsoft in the early 90s managed to take the torch away by doing two things in primary and secondary schools: lowering computing costs and preparing students for more real-world applications.

The reason Microsoft still dominates education is because they’ve been the primary provider of computing to public education for almost 20 years. You’re not going to break that hold without one hell of a fight trying to convince those making the decisions in public schools that the change is worth it.

And to that I say, “Good luck”.

2. Invading privacy

Windows Genuine Advantage was problematic, I will agree. But to say that it “inspects the contents of users’ hard drives” is absurd without evidence backing it up. And the Free Software Foundation so far has presented none.

Windows Genuine Advantage is an anti-piracy tool. Microsoft does have a right under the law to enforce their copyrights, and while I disagree with WGA, it doesn’t scan a person’s hard drive. Instead it is very limited in how it determines whether the copy of Windows or Office you are running is legitimate – which is why it also originally produced a lot of problems.

Microsoft is not invading anyone’s privacy. If anything they are trying to enforce their copyright, not steal information about those using their software. I welcome evidence that Microsoft is, in fact, stealing personal or demographic information without the knowledge or consent of their customers and user base.

3. Monopoly behavior

What the FSF claims used to be true. The monopoly that Microsoft still enjoys is one of brand familiarity. Many are familiar with Microsoft, Office, and Windows, so they stick with it, even when presented with other options (like Macs, for example).

They also say that “even computers available with other operating systems…often had Windows on them first”. Generic statement with no evidence. Jack Wallen is right: the FSF is slipping here.

4. Lock-in

“Microsoft regularly attempts to force updates on its users, by removing support for older versions of Windows and Office, and by inflating hardware requirements.”

Hate to say this, but this isn’t Microsoft driving this. It’s consumers. Consumers are always wanting more features, and the bigger and badder, the better. And show me a company in their right mind who still actively supports software they put out years ago. I’m one of few – I still provide support for MD5 for Win32, which was released in 2002. And Microsoft is trying to sunset support for XP, which was first released in 2001.

Even Sun Microsystems removes support for older products (Java 5 goes EOL at the end of October), as do Linux vendors.

5. Abusing standards

I don’t buy the arguments they present. They say that Microsoft has also bribed officials – evidence please? Who was bribed? There is the suggestion that Microsoft bribed officials in Nigeria (not that it’s a difficult task), and from what I can see it is Linux vendors making the claim.

But on top of this, Microsoft is free to dictate what standards their software will or will not support. One thing we can honestly say is that certain standards Microsoft has little choice but to follow, such as the many standards that are in wide use on the Internet. But when it comes to data storage formats, open source vendors are just as bad as Microsoft.

Look at GnuCash. Sure it can import data in several different formats common in financial software, but as for export… your options are limited.

6. Enforcing DRM

The FSF calls it digital restrictions management instead of its real name, digital rights management. Here they are talking about access to media on the Internet. The FSF also incorrectly states that users have the “right” to record what they see online. Not always the case, and access agreements on web sites dictate what you can and cannot do while there.

But with Windows Media Player, Microsoft added support for DRM to ensure wide availability. If they didn’t support it, but say Apple’s QuickTime player did, Microsoft would lose out big time. It was a strategic move that Microsoft made to help maintain their market share if not take more of it.

7. Threatening user security

Yes Windows has security issues. Guess what? Linux ain’t immune. But part of the security issues with Windows is that Microsoft was catering to usability instead of security. A balance is needed, but Microsoft originally wasn’t willing to make the sacrifices needed for fear of user complaints – like on the order of what they received with UAC in Vista.

And Microsoft’s software became a big target for hackers purely because of market share. If you’re a hacker looking to steal personal information, you don’t cast your malware into a stream where few fish seem to be biting. Oh no, you cast your malware into the largest pond with the most fish, and that pond is Windows.

Concluding…

It is really becoming obvious that the Free Software Foundation is nothing more than a bunch of whining babies. Microsoft got into academia early, before Linux was even a thought in Linus Torvalds’ head, and the FSF is upset that they’re not getting their turn. I mean, Apple had their turn, Microsoft has had theirs for about 20 years, and now the FSF feels that it’s their turn, but Microsoft isn’t relenting and bowing down like the FSF seems to think should happen.

The Free Software Foundation, hate to say it, is less about open source than most other open source vendors. The FSF is about one thing and one thing only: GNU. Not Linux, not anything else, only the GNU project. They feel it’s superior to Microsoft’s offerings, yet they haven’t been able to gain market share like RMS probably thought would magically happen, and they’re fuming about it.

So instead of actually trying to compete in the same arenas as Microsoft, they’re pulling the same punches that creationist organizations tried to pull with education: get a few people in there to try to form a “resistance” and see what happens. And when the majority shout back, scream persecution, demonize the majority, and hope that helps when you cannot compete on merit.

And hate to say this, but Jack Wallen has kind of become the Kent Hovind of the open source movement.

Published by Kenneth on 09 Sep 2009

Open source hardware? Not quite…

Article: “Open source camera could pave the way for open source hardware”

Another article by Jack Wallen. Let’s see, is he full of himself again, or is he actually saying something coherent and thought out? Sorry to disappoint, folks, but he’s full of himself again.

The problem is how over the top he takes this new idea, so let’s start with the idea, which, in actuality, isn’t anything new.

According to Science Daily, some photography students at Stanford University have created a camera with firmware that they are releasing as open source. The idea of open sourcing firmware isn’t new, but the application to photography equipment is, and it could prove to be rather interesting. I am certainly interested in how far this could go.

But let’s get back to Jack’s response to this. He goes very over the top here. Perhaps Jack’s brain is operating at Internet pace, because he’s not slowing down to think.

And how is it he’s not slowing down to think? He talks about the Android operating system for mobile phones as if it’s something that hasn’t come to fruition yet:

Phone developers releases next smart phone as open source and open source developers go crazy making apps to outshine iPhone app store.

But from here, his lack of thinking goes even further, just not in order:

Auto maker creates open source car and some hobbiest (sic) discovers a means to double the gas mileage.

Given that Jack is a stylist with a degree in theatre, I won’t fault him for not thinking this one through. So here’s a little science lesson: you can only get so much energy out of combusting gasoline, and we can only optimize the internal combustion engine so much. And the laws of physics dictate how much energy it takes to move an object of a certain mass at a certain speed over a certain distance. This is why there are concerns that the only way to improve gas mileage beyond what we’ve already done – aside from replacing every car with a hybrid or making them so aerodynamic they’re useless for hauling cargo – is to cut weight.

So it’s highly, highly unlikely that a hobbyist will discover a way to double the gas mileage of a vehicle by modifying software. Sorry Jack, but just some more wishful thinking. But it, unfortunately, doesn’t stop here. His next statement is absolutely absurd.

Cancer center releases their current drug research under the GPL and retired chemist discovers cure for cancer.

Here he puts his ignorance out there for everyone to see. But given that creationism still consumes the US, I can probably assume that Jack doesn’t understand the scientific process either.

Science has been nothing but open for centuries. So if a cure for cancer is going to be discovered, it’ll happen in the open and very peer-reviewed world of science. And if someone claims to have a “cure for cancer”, you can imagine it will be hotly debated for years, if not decades, to ensure the claim has merit.

Some “retired chemist” isn’t going to take a cancer center’s drug research and turn it around into a cancer cure.

And here’s a question arising from this quote:

I want:

  1. To be able to go to a site.
  2. Search through a listing of firmware for my hardware that matches my exact needs.
  3. Download that firmware.
  4. Install that firmware.
  5. Use my hardware in the exact why [sic] I want to.

Okay you want to be able to find “firmware for [your] hardware that matches [your] exact needs”?

Since you keep boasting about open source and that users can become developers and “rework it so it’s exactly the application you need”, how about learning to do that yourself? You talk on and on about software development, implicitly proclaiming yourself to be an expert with regard to open source, when you’re nothing more than a bastion of assumptions that haven’t seen a drop of reality since RedHat 4.2 was released.

If you want the exact software you need, practice what you preach and become a software engineer. Then you’ll get that dose of reality you so obviously need.

I want my hardware to have an “app store” so I could just download new functions and features instantly.

Will you be contributing to said “app store”?

Others have pointed out where you’ve gone wrong in your statements, yet you’re still spouting the same stuff. You do not understand software development, you’ve never been involved in software development according to every profile I’ve seen about you on the Internet, and yet you keep spouting off like you’re an expert.

Now I could be wrong and you could be a great software engineer. So if you’ve tailored all your software to suit your needs (something I highly doubt), then by all means publish the source code for all of us to see your brilliance, or lack thereof.

Oh, and one last thing: separate in your mind the difference between open source and open design. You can’t “open source” a car, but you can openly publish its designs and engineering drawings. Likewise, you can openly publish the hardware designs for, say, a mobile phone, but it’s the firmware that you open source.

But in a way your car is kind of “open source”. If you want to dissect your car to study how it works, be my guest.

Published by Kenneth on 07 Sep 2009

Release management

Every software developer and engineer who has released software onto the Internet has gone through some kind of release management cycle, whether formalized or not. Previously at Colony West, our release management was entirely informal – actually more or less disorganized.

For the longest time, I didn’t use a source control system. About four years ago I started using Subversion, along with its companion Windows program TortoiseSVN, and I won’t part with it for anything else. I love using it, and now I can no longer imagine not using it – I don’t know how I ever managed to develop software without it.

Release management obviously involves the source control system, so what does the release cycle here at Colony West involve? I’ll use the release cycle for the Trade Profiteer as the example of what I do. Now how I release software may differ from what you require, and will likely vary between projects.

New Release Branch

The first step in the release cycle is to create a new release branch in the repository. Whether this step is optional depends on the project. During the Trade Profiteer’s beta release cycle it was considered optional, but it became mandatory with the first full release. Keeping a separate release branch allows you to target fixes to that particular version while making sure any fixes are also merged back into the main source trunk.

Release test builds

When the decision is made to create a new release, final test builds are made to ensure the current source state builds without errors. Once this is verified, scripts are run to update the version numbers within various files in the source tree, followed by another test build to ensure everything still builds cleanly.
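Such a version-bump script can be quite simple. Here’s a minimal sketch in Python – note that the file name and the idea that versions are stored as plain x.y.z strings are assumptions for illustration, not the actual Colony West build scripts:

```python
import re
from pathlib import Path

# Matches a simple x.y.z version string, e.g. "1.3.412".
# (Hypothetical pattern; real projects may embed versions differently.)
VERSION_PATTERN = re.compile(r"\d+\.\d+\.\d+")

def bump_version(path, new_version):
    """Replace every x.y.z version string in the given file
    with new_version, and return the updated text."""
    source = Path(path)
    updated = VERSION_PATTERN.sub(new_version, source.read_text())
    source.write_text(updated)
    return updated
```

In practice you’d run something like this over each file in the source tree that embeds the version number (resource scripts, installer scripts, an About dialog), then kick off the test build.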

A test installer is also built at this time. The installer is tested on multiple operating systems running in virtual machines, as both upgrade installations and clean installations. Any installation issues are corrected, and the fixes are checked into the repository.

If the installer runs clean on all supported operating systems, the changed files containing the new version numbers are checked into the repository.

Tagging the release

Anyone who has used a source control system should be familiar with tagging. Subversion makes revision tagging easy. For those not familiar with tagging, I highly recommend reading the section in the Subversion manual on branching and tagging.

After everything is checked in, the new revision is tagged within the repository with the release version number (major, minor, and build) as the tag name. This tag ensures that the source is readily accessible should I need it down the road to troubleshoot a reported issue.
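In Subversion, a tag is just a cheap server-side copy into the tags directory. Here’s a sketch of how that step might be scripted – the repository URL and trunk/tags layout are hypothetical, not Colony West’s actual repository:

```python
# Build the `svn copy` command that creates a release tag.
# Repository URL and layout are assumptions for illustration.

def svn_tag_command(repo_url, major, minor, build):
    """Return the svn command that tags the current trunk
    with the release version number as the tag name."""
    tag = f"{major}.{minor}.{build}"
    return [
        "svn", "copy",
        f"{repo_url}/trunk",
        f"{repo_url}/tags/{tag}",
        "-m", f"Tagging release {tag}",
    ]
```

The resulting command list can be handed to subprocess.run(); for version 1.3.412 it copies trunk to tags/1.3.412 in a single cheap operation, leaving the tagged source readily accessible later.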

Release build

After everything is tagged, the tag is checked out of the repository into a new folder for a release build. Final release builds are performed for the executable and the installer, and the installer may be packaged into a compressed file.

Preparing the web site

There are multiple steps here. Obviously the first step is uploading the packaged installer to the web site and making it available in the download repository. That part is easy. Once uploaded to the web site, it is available immediately.

Along with uploading the file to the repository, other files are shifted around. Previous releases are moved to their appropriate folders on the repository so they are not immediately visible, but still available if anyone is interested.

The Colony West web site contains a version management system that is used to track version numbers easily. The Trade Profiteer queries this system when it checks for a new version, so we add a new entry into this system to reflect the new version. When the Trade Profiteer queries the system, it will see the new version and will alert the user to the new release.
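On the client side, that check boils down to parsing the reported version and comparing it numerically rather than as a string. A sketch, assuming a simple major.minor.build format – the real Trade Profiteer protocol isn’t published here, so this is illustrative only:

```python
def parse_version(version):
    """Turn 'major.minor.build' into a tuple of ints,
    e.g. '1.3.412' -> (1, 3, 412)."""
    return tuple(int(part) for part in version.split("."))

def update_available(current, latest):
    """True if the version reported by the web site is
    newer than the running copy."""
    return parse_version(latest) > parse_version(current)
```

Comparing tuples of integers avoids the classic string-comparison bug where “1.10.0” would sort before “1.9.0”.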

With everything uploaded to and entered into the web site, the release cycle is formally complete. However, there are still a couple of informal details left.

Publishing details

Informally, we now need to alert other communities to the new version’s availability, as well as provide details about what has changed in the new version. This occurs through this blog, which will always be the primary source of news for this company, but also through other venues. For example, with the Trade Profiteer, I will also post updates to the Puzzle Pirates forum.

Concluding…

Well that’s pretty much what goes on right now. Since the Trade Profiteer is a small, independent application, this way of managing our release cycle is reasonable. As the Trade Profiteer grows in complexity, or more software is released, the release process may need to be modified, but for now things are working well.

One thing that will certainly complicate the release process is the upcoming addition of globalization. Puzzle Pirates has two international oceans, Opal and Jade, designed to be populated and navigated by native German and Spanish speakers, respectively. Though an English interface option is available in the game, it doesn’t seem fair to support those oceans without offering native German and Spanish interfaces.

Including globalization will complicate the release process a little, but it will likely complicate the testing even more… We shall see.

If you have any questions on the release process here at Colony West, feel free to drop me a line and I’ll be happy to clarify anything.

Published by Kenneth on 16 Aug 2009

A non-developer makes assumptions about software development

Just found another blog post by TechRepublic’s open source and Linux advocate and assumption king Jack Wallen.

Article: “Five reasons why your company should hire open source developers”

Given some of the things Jack says in this blog post, you’d think he’s become RMS 2.0… Before getting into the five reasons, he makes some very bold, and incorrect, statements.

Many larger companies do not place any value on open source applications, therefore they do not place any value in those who code the applications.

This is absurd. Any software company looking to save money will look at the open source community, so they will place a lot of value in open source applications and libraries, as those are now a part of the product they are making available. And by placing value in the code employed, they will also place value in the developers who wrote it. But value also comes from quality, and open source does not automatically mean greater quality.

Some companies are afraid that hiring an open source developer would be a liability – possibly reverse engineering their proprietary software and then releasing forked versions into the community.

Again, absurd. He does not back this statement up at all, either. Which companies are afraid of hiring open source developers? If you’re gonna say something, be ready to back it up.

Any person hired as a software developer or engineer at a company will also have access to source code, so there would be no need to reverse engineer anything. I have full access to quite a bit of source code at the company where I am currently employed. Now, can I take it and release a forked version into the community? Absolutely not.

If I tried to do this, my employer would not only fire me, they would sue me so hard that I’d be living on breadcrumbs and water drops till I die. I would be homeless in a heartbeat with virtually nothing to my name, wandering the streets. My employment agreement and applicable copyright and trade secret laws ensure this.

Furthermore, I would never be hired as a software engineer again. Ever. The lawsuit would ensure all details were public, meaning my reputation as an engineer would be irreparably tarnished.

Now has this ever happened? It probably has. Developers have probably stolen source code from their employers in the past. But I highly doubt it is as large a fear or problem as Jack seems to imply.

After making these two baseless assumptions, he gets to his five reasons. Let’s look at each of them.

You can see more than their resumes. Because the applications they work on are open, you can get a first-hand look at the code they write even before you do that first interview. Try to do that with a developer for a proprietary software developer. This will give you a fairly instant grasp of your interviewee’s understanding of programming. You will know right away how well they write their code, if they use comments well, what tools they use, etc.

This one is more common sense than a benefit. Any person who contributes to open source projects should have that on their resume. Anyone who writes their own should definitely have it on their resume. But he overlooks something here.

When you submit a contribution to an open source project, as opposed to writing your own, your contribution is “massaged” into the rest of the code. What I mean is that the contribution may be changed or adapted to conform to any published or implied coding standards and practices behind the project. There will very likely be comments in the code denoting your contribution, but it won’t necessarily be an indicator to the kind of code quality an employer can expect from you.

Now certainly anyone looking to be a professional developer (namely college students) should get involved in an open source project. I recommend it to the interns at work, and I’ve recommended it to others as well. It’s a great way to gain some instant experience you can put on your resume.

His second point is just full of assumptions and obvious bias:

Open source developers have had to think on their toes and patch the programs that Microsoft has (often times) intentionally broken. Think about the Samba team. For the longest time they would take a step forward and Microsoft would change something that would push them a couple of steps back. The Samba team had to be on their toes all the time to make changes so their software would continue to work with the latest version of Windows.

He shows his obvious and well-publicized bias against Microsoft here, and makes a lot of assumptions. First, he’s implying that open source developers (i.e., the community) are fixing Microsoft’s problems. This isn’t the case. Microsoft’s software is mostly proprietary, so there isn’t any way that can happen.

But he shows how little he understands the “big picture” with his mention of Samba.

Samba is an open source service for Unix/Linux operating systems that allows them to masquerade as Windows computers on a network. I use it on my media server, which runs openSUSE 11.1. To accomplish this, Samba implements the Server Message Block (SMB) protocol according to published specifications.

The protocol was not created by Microsoft, but Microsoft uses their own derivation of it on Windows. But given Jack’s statement, one would think that this specification changes often, so the Samba team is constantly changing the software to adapt to Microsoft. This isn’t the case.

If you look at the Samba Bugzilla, you will notice that there are over 1,300 bugs still open, as of August 16, 2009, and they are making maintenance releases on a somewhat regular basis to correct bugs. Are they trying to stay up to date with Microsoft? I doubt it.

Jack, do your research.

On to his third point, which is again laden with assumptions:

Although this is not a universal truth, open source developers are very passionate about what they do. They have to be, otherwise why would they do it? If you hire an open source developer that has a passion for their work on open source projects, it might very well spill over into the work they do for you. Now I understand that many developers are passionate about their work (I’ve read Microserfs ;-) ), but passion in the open source community runs a bit hotter than it does in the non-open source communities.

He’s right that it isn’t a universal truth, so why is he even stating it? Both open and closed-source developers can be passionate about what they do. It depends on the person, not the licensing model.

Writing software is about more than coding something. It’s about helping people. Both open and closed source developers can be passionate about helping people. To say passion runs deeper in the open source community is to draw a baseless assumption clouded in bias and ignorance.

I write proprietary software, both independently and for hire, and I’m likely more passionate than many open source developers I’ve met. Are there others more passionate than me? Certainly.

Along with an open source developer you will enjoy open source support. This is a tricky one for sure. You can’t hire a developer and then expect that developer not only to code but also serve as support for end users. But it is always nice when there is someone there to help support the IT department. That Apache server that someone installed a long time ago and has been running non-stop without upgrades because everyone is afraid to touch it? It could be given the attention it so deserves now.

Developers are partially responsible for helping the support department. After all who knows how a particular feature works better than the person who wrote it? At my previous job with MediNotes, I was being contacted by the support department on a regular basis with questions or concerns.

If a company hires a contributing developer for an open source project partially because they use that project internally, then expects that developer to become the point of contact for support, they will be disappointed. Being a subject matter expert is one thing, but what Jack thinks will happen likely won’t.

Plus if an organization sets up an Apache server but only one person is trained on how to maintain it, then that is that organization’s fault, and they should’ve brought someone in to take care of the server when that need was identified.

And like adopting any open source project, you will save money. Along with hiring a single open source developer, you now have the “support” of the entire open source community, should you need it. If you are working on an-in house project that ends up going to open source that project has the opportunity to scale in proportion to the size of the community supporting said project. If that project catches the eye of the open source community, who knows, it may wind up being the next Samba or Apache.

What? Jack, you are clearly not thinking. How is it that one person brings with them the support of the entire open source community?

Your statement about an in-house project going open source is also completely unrelated to the rest of the article. And for such a project to become the next Samba or Apache, it needs a similarly large user base. Plus, there is one key difference between Samba and Apache: Apache is available for multiple platforms, not just Unix/Linux.

In his article, he also includes a poll question: “Are open source developers good for the company?” Of the 339 votes cast so far, mine included, 58% answered “Yes”, 25% answered “Depends on the developer”, 11% answered “Depends on the company”, and 5% answered “No”. My vote was “Depends on the developer”, as an unskilled open source developer is not better than a skilled proprietary developer. Direct evidence of an open source developer’s skill level is just more readily available.

And now his closing paragraph:

I don’t want anyone to get the impression that I think open source developers are better than closed source developers. But they do have different ideologies and they do go about things differently. For a long time companies avoided hiring open source developers for one reason or another, but I have always and will always stand by my claim that open source developers make great additions to your IT staff.

Too late, Jack. It’s clear you think that open source developers are better. They might have differing ideologies, and it is those ideologies that should be queried by anyone conducting an interview. To say that “for a long time companies avoided hiring open source developers” is absurd, and another claim for which I doubt evidence can be provided.

To say that open source developers make great additions to an IT staff is also a little short-sighted. How good an addition a developer makes will depend on the developer as a whole, not just on whether he or she happens to also be an open source developer.

Jack, you need to stop making comments about software development. Your profiles on TechRepublic and LinkedIn say nothing about you being involved in the development of any software, so you’re just spouting assumptions that have little to no basis in reality and that are derived from other assumptions you’ve made.

Published by Kenneth on 31 Jul 2009

Software safety

This afternoon a user on the Puzzle Pirates forum posted an interesting question:

Im [sic] sorry to ask this question, but how do we know this software is safe from account hackings and such?

It is certainly an interesting question, and while my response likely won’t quell concerns much, it did get me thinking about safety with regard to software. How do you know that every piece of software running on your computer is safe? Even if you examine the source code, is there a way to know for sure? In actuality, not really.

Now I know there will likely be some open source advocates who say that open source software is safer than closed-source software, but statistics don’t really support that claim. Any application on your computer can be used as a conduit for compromising your computer’s security.

In my response to this concern, there wasn’t much I could say, but I finished with this statement:

The Trade Profiteer will not harvest any information about you and, to the best of my knowledge as the sole and principal developer of this application, cannot be used to compromise someone’s account.

About all I could provide were verbal assurances. And this was not the first time this concern had been raised. Not long after the initial beta release and the announcement on the forum, forum user hugnam posted this:

Sounds pretty good for an application, but unless it’s proven to be safe, i’m not gonna use it.

Software safety is certainly an issue. Any data you enter into any application has the potential to be harvested and sent somewhere. An application may hook into the system to capture keystrokes. Unless you run software that detects such behavior, you can’t know for certain.

To help quell fears a little, I responded to the most recent concern on the forum with this:

If you are concerned that my application might harvest data about you, your computer, or your installation of Y!PP, you can certainly configure your firewall software to block the Trade Profiteer from accessing the Internet. Unlike with the Pirate Commodity Trader with Bleach, you will not be interfering with the Trade Profiteer by doing that. If you configure your firewall to block the Trade Profiteer, the only functionality you will be blocking is its ability to check my web site for a new version, which is the only reason it will ever connect to the Internet.

Further, I added this, which may give a little more assurance:

I have also been playing Puzzle Pirates since late 2005, I’ve built up considerable wealth in the game, and I’m not about to risk all of that.

Creating an application that can be used to harvest account information is an offense that will get you banned from Puzzle Pirates and likely reported to applicable government agencies. It’s not something I’m going to risk.
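The Trade Profiteer itself was a desktop application whose source isn’t shown here, so purely as an illustration, here is a minimal sketch in Python of the kind of version check described above. The URL and the plain-text version file format are assumptions for the example, not the application’s actual endpoint:

```python
from urllib.request import urlopen

# Hypothetical endpoint for illustration only; the real application's
# update URL is not shown anywhere in this post.
VERSION_URL = "https://example.com/tradeprofiteer/latest-version.txt"

def parse_version(text):
    """Turn a dotted version string like '1.2.3' into a comparable tuple."""
    return tuple(int(part) for part in text.strip().split("."))

def update_available(installed, latest):
    """Return True when the published version is newer than the installed one."""
    return parse_version(latest) > parse_version(installed)

def check_for_update(installed_version, url=VERSION_URL, timeout=5):
    """Fetch the published version string and compare it to ours.

    On any network failure (including a firewall blocking the request),
    report that no update is available rather than raising an error.
    """
    try:
        with urlopen(url, timeout=timeout) as response:
            latest = response.read().decode("utf-8")
    except OSError:
        return False
    return update_available(installed_version, latest)
```

Note that blocking the application at the firewall simply makes the fetch fail, which this sketch treats the same as “no update available” – the only functionality lost is the check itself, exactly the behavior described above.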

Published by Kenneth on 26 Jul 2009

Is a landline necessary?

News Article: “How to Cut the Beastly Cost of Digital Services”

Right now I live in the Kansas City metro area. My Internet service is currently through Time Warner Cable, and I love the service we receive. It’s much better than the DSL service I previously had through Qwest while living in the Des Moines metro area.

While with Qwest, I was paying about $43/month for 7 Mbps down/896 Kbps up DSL service bundled with DirecTV with DVR and a home phone line. The total package came to about $130/month, plus I had a family cell phone plan covering my phone and my fiancée’s phone.

Since moving to Kansas City, we have Internet through Time Warner, we don’t have any advanced television service (cable or satellite), and we don’t have a landline. We have only our cell phones (through AT&T wireless). One thing that worried me about having only our cell phones was 911 service – as it should worry anyone.

But after securing an apartment in Kansas City and starting some essential services, like power, I held off on starting landline service because a basic landline couldn’t be ordered online through AT&T’s web site. I would have had to stop into an AT&T store or call in to start service. So I held off. We were still in the middle of a move, and I had my cell phone, so I wasn’t worried.

I’ve had to call 911 from my cell phone before. On December 2, 2005, I was involved in a minor traffic accident in downtown Des Moines, and I was the one who summoned police. No injuries in that accident – well just injuries to wallets…

But the one event that told me that going with a landline was likely not going to be necessary occurred during the early morning hours of March 22, 2009, on Interstate 35 southbound toward Kansas City outside Cameron, Missouri. I witnessed a car roll over into the median and stopped to offer assistance. My fiancée pulled out her cell phone and summoned emergency services.

Now, on a cellular phone, 911 can be a little flaky depending on where you are. AT&T’s Terms of Service include a disclaimer that connection to a 911 service in a timely manner cannot be guaranteed. My parents live in the country, so I would not trust 911 on my cell phone out there – I’d barely trust my cell phone at all.

Being on the Interstate, if 911 could not be reached through anyone’s service, someone would have had to drive into town and find a phone. Since we were only a mile out from a reasonably sized city, not connecting to 911 would’ve definitely said something about AT&T’s service.

But my fiancée was able to connect through to 911, sirens were within earshot within five minutes, and lights were not far behind. My only reservation about skipping the landline was settled, and I opted against it.

So unless you have problems connecting to 911 on your cell phone – your provider can probably help you assess this without you potentially breaking the law by dialing 911 “just to check” – you can probably do without a landline. And for those of you on pre-paid plans, calls to 911 do not use your minutes.

Published by Kenneth on 24 May 2009

People just don’t understand at times…

Every once in a while an article on Yahoo! Tech catches my attention. This time, it’s an article about a cellular customer who incurred a $62,000 cell phone bill. Now we’ve heard of cell phone bills in the past reaching into the thousands of dollars – typically because of texting plans where usage ended up being much, much higher than the allotment. A father in Cheyenne, WY, smashed his daughter’s cell phone after she incurred a huge bill for texting her friends like mad.

But the case of the $62,000 cell phone bill is one of entertainment and international roaming. According to Yahoo! Tech, quoting a CNN report, the customer downloaded a copy of the movie Wall-E – about a 1 GB download – over his wireless data card while in Mexico. Having a wireless data card is nice, but one thing most don’t realize (because they don’t read the fine print) is that you are capped at 5 GB of download usage for each billable month.

Plus use your cell phone for anything internationally and you will incur additional fees. Why is this? It’s the same concept as roaming: you are using someone else’s network and not the network of your cellular provider. When you do this, your provider incurs charges from the network you “borrowed”, along with expenses for patching through the call you made. When you borrow a cellular network in another country, let alone another continent, things can get expensive fast, for you and your provider.

And one concept that should be very familiar to everyone is that all costs incurred by a business are eventually passed on to their customers.

Now in today’s world, where you can get an unlimited family text plan for a reasonable price (I’m paying $25/month through AT&T) and unlimited data plans as well, with unlimited call plans coming down the pipe (they’re still prohibitively expensive, in my opinion), many of the commenters on the Yahoo! Tech post were complaining about corporate greed. One commenter argued that because most people do not use the majority of their talk time – something that, arguably, cell phone providers bank on – there is no reason for providers not to lower their prices.

A similar argument has been used against the oil and pharmaceutical industries, without regard for what those companies do with that money (hint: they don’t line their pockets with it). So what do these companies do with their profits? Simple: they invest it.

In the case of cellular companies, that investment has allowed for significant upgrades to cellular systems throughout the United States and abroad. While there have been significant advances in wireless phone technologies, the providers are responsible for actually making those phones work, and that requires periodic technology upgrades. Upgrades require capital, which tends to come from previous years’ profits that have been sitting in a bank somewhere.

One commenter called for the Federal government to institute caps on overage charges. The reasoning: the cellular providers, who are still subordinate to the FCC in the United States, must license spectrum from the Federal government, so the FCC should institute a rule capping overage charges. Ah, just what we need, more government intervention…

The cellular plans existing today are proof positive of the effectiveness of free markets. Now I’m not talking about your standard contracts, where you walk into an AT&T store, pick up an iPhone, and set up your plan. What actually helped bring cellular plan rates under control was the introduction of pre-paid plans.

The first pre-paid plans I recall seeing were offered by MCI in 1998 or 1999. The phones were older model phones, but they still did the job, and you paid for the time you used – but you still had to purchase a certain minimum over a certain period to maintain the phone number, much like every other pre-paid plan today. Stop using the phone and you lose the number, and the minutes you’ve bought.

Today, arguably, the most prominent cellular pre-paid service is Tracfone. But did you know that Tracfone piggy-backs on the other cellular networks? My mother has Tracfone and her phone number is provided by US Cellular. AT&T offers a couple options, and there’s also Net10, among others. When pre-paid plans started becoming more popular, the major providers started offering more to bring back customers they lost, including reducing prices.

The problem with the plans, though, is that they are inconvenient. You have to remember to buy a card, then you have to go through the steps of redeeming the card with your provider, then (depending on the provider) you enter codes into the phone to actually have the minutes available. If this is no longer the case (I know it once was with Tracfone), I’d like to hear some feedback as to how it is now.

Tracfone, several years ago, offered a monthly automatic payment plan, which took away the need to remember to buy a card, but you still had codes to enter into the phone. And it only gave you so many minutes, so you were still limited – or you had to buy more, which meant buying a card.

But given all the complaints about the big providers, few seem to remember what cellular plans used to be like: an inadequate supply of plan minutes, meaning a constant risk of overages, especially once texting and the mobile web were introduced and started gaining popularity. Complain all you want, but things are improving. And while it seems that all of these companies are taking in excessive profits, what they do with the money is more important – though most don’t look beyond the numbers.

And it is only through reinvestment of profits that things improve. Price caps will reduce profits, which will in turn reduce the amount of investment that can be made. Reduce the investment and you won’t see expansions or improvements to existing cellular networks as quickly.

Plus bear in mind that it is due to new innovations that what was previously expensive no longer is, and what is expensive now may not be in the future. And don’t expect anything regarding that to change at Internet-pace.