Definition of a Computer Scientist

What is a computer scientist?  Google gives the definition: a scientist who specializes in the theory of computation and the design of computers.  In most of our minds, a computer scientist is a person who builds computer systems.  More recognizable names (at least in terms of what a person with a computer science degree would be hired for) would be developer, coder, software engineer, or software architect.  A lot of people will argue with me that a computer scientist is more than just a coder, and I would fully agree.  However, when a person wants to be a software developer, they are most of the time steered into a computer science program, as it's the best way for them to obtain the skills required to build large software systems.

But the definition is beside the point.  The real point of this post is to comment on a conversation retold to me by a colleague who is taking a senior-level college course.  To put the conversation into perspective, I need to describe the course (and for the record, I took this course myself several years ago).  This course is a project management course for wannabe computer scientists.  In the first semester of the class, students are required to come up with "problems" and probable solutions to them.  The problem and solution are then pitched to a panel of industry professionals.  The solution is actually implemented in the second semester.  As you might imagine, conjuring a problem out of thin air is quite daunting.  Eventually, though, students discover a problem that earns the professor's approval.

This conversation came about because my colleague's group was unable to find a problem.  As far as I was told, several problems were proposed and shot down.  When one particular idea was rejected, the professor said, "Your job as computer scientists is to innovate."

This statement is just wrong.  Your job as a computer scientist is not to innovate!  Your job as a computer scientist is to SOLVE PROBLEMS.  Now, it is possible for a problem to be solved in an innovative manner, but you should never innovate for the sake of innovation.  In the real world (definition: the world outside of academia), you will be given problems to solve, and you will be expected to solve them.  Furthermore, you will be handed your problems, not expected to discover them on your own.  Creating your own problems to solve would make you an entrepreneur; you become the computer scientist when you actually solve them.

For example, let's say I have a glass window, and at 5:00 every day the sun shines through, casting a terrible glare on my computer screen.  An innovative solution to the problem would be to design a microfilm composite that reduces the glare depending on the amount of sunlight shining through.  Another solution would be to buy a set of blinds.  Not "innovative" by any means, but it solves the problem.

What advice do I have for the group of computer science students trying to get through this project?  Don't try to come up with a brand-new problem.  That's impractical, or the solution to the problem is at too high a level for you to understand as an undergraduate.  Instead, try to take an existing problem with a bad solution.  I guess you could say you should take a bad solution and innovate?  Ideally, you just want to get through the semester.  When you enter the "real world", the problems will come to you.  At least you'll be paid to solve those problems, and you won't have to tackle them while worrying about an English midterm, linear algebra homework, and your seventeenth algorithms assignment.

Finally, take everything your professor says with a grain of salt.  They probably haven't seen the real world in a long time.
