
Blog: Csaba Toth's software developing experiences
Description: Interesting findings from various areas of software engineering
Created by Csaba Toth on 2010 Mar. 14, Sun 01:38 CET
Last post 2011 July 01, Fri 03:03 CEST
(6 Posts | 10112 Visits | Activity=4,00)

Since February I have been trying to befriend:

- J2EE (Java EE)

- EJB

- Eclipse

- JBoss (with JDK6 !!!)

- JBoss Tools Eclipse plugin

(- Maven and m2eclipse Eclipse plugin)

And I'm struggling with them hard: opening issues in the corresponding projects' JIRAs (and getting acknowledgement that yes, this is a known problem), keeping 20 Firefox tabs open trying to find a solution for exceptions but never finding exactly my setup (for example, it's WebSphere or GlassFish instead of JBoss), copying jar files into the endorsed directory, cleaning up Eclipse to avoid crashes, and... I won't continue.

And people tell me: "yes, the learning curve is very steep".

Not necessarily. I already knew these technologies one by one; they just don't work flawlessly together (let's put it that way).

So I realized that the learning curve of, say, EJB is not steep on its own, but there is a curve which is very steep when you put all of these technologies together:

The sucking curve.

Published by Csaba Toth on 2010 Mar. 27, Sat tocsa

You know, there comes a time when you have to estimate the time a project needs. It's often very hard. Especially if a piece of software has 20 years of history, it can easily happen that when you dig down to the bottom of the code and open the cupboard, some skeletons fall out of it. And cleaning up the mess will require an extra month!

So when you estimate, you can say 30 man-days, or 3 man-months. But in a really _huge_ piece of software there can be requirements which would take not just a few man-years (!) but more. I realized that some projects would claim... a MAN-LIFE!

A man-life project requires the whole life and soul of a person to complete. Or half the lives of two colleagues. These are serious projects which usually won't start for lack of resources; their limitations live on deep in the code, more layers of sediment are deposited on top of them year by year, and sometimes they turn other projects into man-year projects.

Published by Csaba Toth on 2010 Mar. 27, Sat tocsa

Back in Hungary I regularly attended one-day Microsoft conferences, each wrapping up some topic, e.g. ASP.NET development, AJAX development and mashups, or new features of upcoming software. Many times the material was prepared and presented by students of the AUT department of BUTE (Budapest University of Technology), which had a good connection with Microsoft. I'm not a Microsoft-only guy; over the years I attended several one-day Sun conferences, TeX conferences, and PHP conferences.

(Once I even attended a full-day hands-on lab on Microsoft Windows Mobile, given by professionals from Germany. One of the greatest conferences was a two-day wrap-up of the MIX 07 conference, held in Budapest for Central Europe. It was called REMIX 07, it took place in a beautiful museum, and it was followed by a ship cruise party on the Danube at night. Sandor Nacsa was the driving force behind REMIX and the one-day conferences in the Lurdy Mall.)

I intended to continue this tradition (always trying to widen my perspective and keep myself up to date with new technologies) at my current place of residence as well. Let me list some very good conferences and events in Tennessee, close to the Nashville area:

  • devLink: a three-day conference around Nashville during the summer
  • CodeStock: a two-day conference in Knoxville during the summer

Both of these events feature very experienced speakers, and you can find plenty of topics which are not Microsoft-related, too!

Besides that, I regularly attend the monthly meetings of the Nashville .NET Users Group and the Nashville Web Developers Group. The topics are usually new to me and interesting.

There's also a SQL Users Group in Nashville, which I have attended only once so far. And now I have discovered an event series called SQL Saturday, so I'll visit Birmingham, AL this month. Because my new job is heavily connected to databases and data mining, I think I can refresh my knowledge of database design and also pick up best practices and experience.

When possible, I also join Microsoft's MSDN Unleashed and ArcReady event series at Microsoft's Franklin office (Microsoft has talented guys there, like Brian H. Prince).

Why do I attend these conferences? Is it worth it?

I think it very much is! First of all, most of these are totally free or very low cost. Secondly, it's not only the topics that are important (I believe someone eager enough can indeed pick up real knowledge), but you can also hear very interesting things just chatting with people. I meet lots of people and make friends.

I also have a theory that in our busy, rushing life a software developer often just doesn't have any time to learn on his own. By devoting time explicitly to these events, I reserve resources (of my own) for them, and so I have at least some opportunity to learn something new, improve myself, and escape the everyday monotony for a short while.

Let me mention a concrete benefit. From January 2009 to August 2009 I attended a C# .NET + ASP.NET developer course. At the end of the course we had to create a news portal to demonstrate our skills. As it happened, I had attended Michael C. Neel's presentation about Lucene at the 2009 devLink conference (by the way, Michael C. Neel is the organizer of the CodeStock conference). It was fascinating; Lucene seemed to be a very good tool for implementing full-text search on websites. And guess what: one of the required features of our news portal was full-text search. I used Lucene, and I was very pleased. Thank you, vinull!

Sometimes you can even experience live podcast shows as post-conference events; last year I managed to attend the Deep Fried Bytes and .NET Rocks talk shows.


Published by Csaba Toth on 2010 Mar. 16, Tue tocsa

I was a C/C++ CAD software developer for more than 9 years (an architectural CAD developer for 7 years, then a modeler CAD developer for 2 years after that). My new job involves Java technology. My IDE of choice is Eclipse, although I have tried NetBeans and it is also a great tool.

It really helps if you increase the Java Virtual Machine memory-related arguments in the eclipse.ini file. But I want to tell you what can slow Eclipse down to an extreme degree: the virus scanner. Why? Because in the Java world, jar files (and ear files, war files, etc.) are actually PKZIP-compressed archives. This is very good for saving disk space. For the same reason, plus the ability to cloak evil code a little, malware writers also often deliver their payload in compressed form. Because of that, virus scanners peek into the file, see the PKZIP magic number ("PK"), decompress the archive, and scan its contents. This behavior can slow Eclipse down so much that it becomes unusable. The only thing we can do against it (besides using Linux) is to find the "scan compressed files" option in the virus scanner and turn it off. It is important to understand that this weakens the defense of our system, so turn off scanning only for compressed files, nothing else.
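For illustration, the memory arguments I mean go after the -vmargs line in eclipse.ini. The values below are just an example for a machine of that era with a few GB of RAM; tune them to your own setup:

```ini
-vmargs
-Xms256m
-Xmx1024m
-XX:MaxPermSize=256m
```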

Published by Csaba Toth on 2010 Mar. 14, Sun tocsa

The main message of my previous post was that the buffering of stdout can cause problems if we want to receive it from another process. Last week (the middle of March, 2010) a colleague of mine was trying to use R (for statistical and analytical purposes), starting it as a command-line tool and receiving data through the command line. He didn't get any data back, but I encouraged him to keep trying and to really check what was happening with stdout.

What happened is that the data was there, but it got lost in the "pipes". The command line he used chained together several stdouts, and the data finally vanished. In the end he was able to come up with a solution.

Published by Csaba Toth on 2010 Mar. 14, Sun tocsa

There's a project I maintain in my free time. It has a high-speed, number-crunching, command-line core written in C/C++. This core reads its configuration from an XML file. The input and output files are specified in that XML, and the program gives simple feedback about the ongoing computation on stdout:

  • gives error messages if needed
  • displays the opened/read/written files
  • displays an ASCII progress bar consisting of star (*) characters

Although configuration through XML works well, the scientist users of the tool needed a GUI for it; it is indeed friendlier to tune the parameters on a GUI than to fiddle with an XML file.

I was sure that I would keep the computation core as a stand-alone unit, as it is right now, so the GUI would really be just a user interface. But the question emerges: how will the two programs communicate with each other? There are several different IPC methods, but the easiest thing is to use the one which already exists: the standard output.
The GUI really needed just some feedback, so I placed a read-only, multi-line edit box onto the form, into which I would feed the stdout of the command-line utility. Currently I'm on the Windows platform; the computation core is a Microsoft Visual Studio 2008 C/C++ solution, while the GUI is also an MSVC 2008 solution, but using C# and Windows Forms.

I also decided that if I went down this straightforward stdout communication path, I still needed to give some feedback on the progress of the computation. The command-line tool normally displayed only 10 stars, which was not enough resolution for the GUI, so I introduced a new command-line switch for the core executable (-g), meaning that the computation was started from the GUI (and not manually from the command line). If the core saw that switch, it printed not 10 but 100 stars over the course of the computation, each on a new line. That way I got a better-resolution progress bar for the GUI.

So I started the computation executable as a separate process from the GUI, but I had a problem: I didn't get proper feedback. I played a lot with the various properties of C#'s Process class, but either I got no feedback at all, or only some feedback after the computation ended (which could take a long time). I had to attack the problem several times before I finally found the cause: it was on the command-line tool's side. To be able to receive continuous feedback, I had to turn off stdout buffering with a special instruction in the core:

setvbuf(stdout, NULL, _IONBF, 0);

I use this only in the case of GUI operation (the -g command-line switch), because otherwise the tool works fine as it is.

On the C# side, I have these settings:

computeProcess = new Process();
String computeCommandParams = " -g -q";
// Configure the start info
computeProcess.StartInfo.FileName = "Compute.exe";
// UseShellExecute must be false for stdout redirection.
computeProcess.StartInfo.UseShellExecute = false;
// Redirect the standard output and error of the compute process.
// These streams are read asynchronously using event handlers.
computeProcess.StartInfo.RedirectStandardOutput = true;
computeProcess.StartInfo.RedirectStandardError = true;
// Set the command line arguments
computeProcess.StartInfo.Arguments = computeCommandParams;
// Set our event handlers to asynchronously read the compute output.
computeProcess.OutputDataReceived += new DataReceivedEventHandler(ComputeOutputHandler);
computeProcess.ErrorDataReceived += new DataReceivedEventHandler(ComputeErrorHandler);
computeProcess.StartInfo.CreateNoWindow = true;
//computeProcess.StartInfo.ErrorDialog = true;
computeProcess.EnableRaisingEvents = true;
computeProcess.Exited += new EventHandler(ComputeExitedHandler);
// Launch the process and begin the asynchronous reads.
computeProcess.Start();
computeProcess.BeginOutputReadLine();
computeProcess.BeginErrorReadLine();


Published by Csaba Toth on 2010 Mar. 14, Sun tocsa
