Friday, February 26, 2010

Progression Webcast Q&A

Yesterday we did the first of three webcasts on the upcoming release of Advantage version 10. I would like to thank everyone who attended our live webcast. If you missed it, you can see a replay here. You can register for the remaining webcasts using this form.

We had a handful of questions during the webcast. I have listed these questions and answers below.

  • When will the Version 10 Beta be available?

The Advantage 10 beta is scheduled to be released in late March or early April.

  • When will Version 10 be available?

We normally do a six to eight week beta test cycle. Our current plan is to release Advantage 10 in the second quarter of 2010. We expect it to be available by the end of June.

  • Will Advantage support Hibernate?

We are investigating several ORM solutions for a future version of Advantage. Hibernate is one option we have looked at. If you need this functionality you should vote for Hibernate support on our feedback site.

J.D. Mullin, the Advantage R&D manager, has some additional comments about ORM on his blog.

  • Why does the .dat file need to be stored on the client (as opposed to the server only)?

Both the server and client need to have access to the Unicode functions and collation sequences when manipulating Unicode data. These functions and collations were intentionally left as separate files so they don’t have to be distributed by developers using only ASCII character sets. The Unicode collations are stored in the icudt401.dat file which must be installed on the server and all clients.

  • We still have users with a mix of DOS-based programs and Windows-based programs sharing a common ADS database. Upgrading users from 8 to 9 caused significant differences. Will there be more with v10? In particular, we noted that reindexing ADS files with v9 for Windows generates different indexes than reindexing from a DOS program using adsdosip.exe as the transport layer.

We do not expect there to be any significant differences in behavior between Advantage 9 and 10. Your applications should work the same with either version. By participating in the beta program you can identify any issues and provide feedback to the R&D team. This will allow us to provide the best possible migration path when the server is released.

You should always review the "Effects of Upgrading" section of the help file for useful information when changing the version of Advantage that you are using.

  • Will Advantage be using UTF-8 or UTF-16?

Advantage will store Unicode characters in the tables with UTF-16 encoding. However, you can apply any encoding you need once the data has been retrieved from the server.

The next webcast in the series will be on March 25th at 10AM EST and 2PM EST. I hope to see you there.

Wednesday, February 24, 2010

Pure and Simple Webcast Series

Tomorrow is the first in a three part series on Advantage version 10. You will have two opportunities to view the presentation and ask questions live. The first webcast is at 10AM EST and the second will be at 2PM EST. The description for tomorrow's webcast is below:

Advantage Database Server 10 | Pure and Simple

Over the past 15 years, the Advantage Database Server has become the heart of many data management applications with millions of deployments worldwide. Spring of 2010 will be significant for Advantage developers and users alike. Advantage Database Server 10 will be unveiled featuring key performance and usability enhancements and progression enablement like never before. Join us for this 3-part webcast series taking a close look at how Advantage Database Server 10 can impact your business for the greater good, pure and simple.

Webcast 1: Progression

Advantage grows as you grow. V10 supports current technologies, allowing you to stay ahead in the market, including Windows 7, Visual Studio 2010, Delphi 2010, the ADO.NET Entity Framework, Unicode, and now 64-bit clients to round out our already popular 64-bit servers.

You can register for the webcast using this link.

If you cannot attend tomorrow you can view a recording of the webcast next week. I'll post a link as soon as it is available.

Wednesday, February 17, 2010

Copying Tables (Part Deux)

In Copying Tables I discussed the different ways you can copy data from one table to another table using both SQL and the Advantage API. In this post I am going to highlight some differences between the two approaches.

First I want to mention one API I did not discuss in my first post: AdsCopyTableStructure. This function creates a new table with the same structure as the given table. No data is copied from the original table with this function. You can accomplish the same thing with a SELECT INTO statement by specifying a filter condition that returns no records.
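
A minimal sketch of that SQL approach (the new table name is illustrative):

```sql
// Create a new, empty table with the same structure as Employee.
// The condition 1 = 0 never matches, so no records are copied.
SELECT * INTO Employee_Structure FROM Employee WHERE 1 = 0
```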

Now back to the main topic of this post: the different functionality of the API and SQL. SQL offers more flexibility because it allows you to create a new table with a different structure than the original table. This is done by specifying columns in the select list of a SELECT . . . INTO statement. The new table will be created with the specified columns in the specified order. This allows you to reorganize your columns or copy just a subset of the columns from a table.
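
For example, a sketch of copying a few columns into a new table (the column and table names here are illustrative):

```sql
// Create a new table containing only three of the columns, in a new order
SELECT LastName, FirstName, Department INTO EmployeeNames FROM Employee
```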

You can also combine the results from several tables into a single table (think views). You can use all of your normal SQL syntax to create joins and have the results dumped into a physical table, all "on the fly". You can get similar functionality with the API by using a view handle instead of a table handle. Additionally, SQL allows you to create temporary tables whereas the API only creates physical tables.

// Create a table with records in a specific order
SELECT * INTO EmployeeByHireDate FROM Employee
  ORDER BY HireDate
// Create a table using a joined result set
SELECT c.CustID, c.CustNum, c.LastName, c.FirstName, c.CustomerSince, 
       cm.Name as CompanyName, cm.SalesRep INTO CustomerList
FROM Customer c INNER JOIN Company cm ON c.CompanyID = cm.CompanyID
// Create a temporary table based on a physical table
SELECT * INTO #Dept102 FROM Employee
WHERE Department = 102

The ACE API includes one function whose job cannot be done with an SQL command: converting a table. AdsConvertTable converts the specified table between the supported Advantage formats: DBF/NTX, DBF/CDX, DBF/VFP and ADT. Like the other table copying APIs, AdsConvertTable can respect or ignore filters, allowing you to create a new table in the specified format with a portion of the records from the source table.

I use this function a lot when re-creating issues sent in by customers. I usually have data that I can use to test but often it is in ADT tables. By using AdsConvertTable I can quickly convert the data to DBF or VFP tables to help resolve the issue. As with the other API calls this function is fairly straightforward.

uint lRetVal = ACE.AE_SUCCESS;
IntPtr hConnect = IntPtr.Zero;
IntPtr hTable = IntPtr.Zero;
// Connect to a directory of free tables
lRetVal = ACE.AdsConnect60("C:\\Data", ACE.ADS_REMOTE_SERVER, "", "",
                           ACE.ADS_DEFAULT, out hConnect);
if (lRetVal != ACE.AE_SUCCESS)
{
    // Handle error here
}
// Open the table to be converted
lRetVal = ACE.AdsOpenTable(hConnect, "C:\\Data\\MyTable.ADT", "",
                           ACE.ADS_ADT, ACE.ADS_ANSI,
                           ACE.ADS_DEFAULT, out hTable);
if (lRetVal != ACE.AE_SUCCESS)
{
    // Handle error here
}
// Convert the table to a VFP table
lRetVal = ACE.AdsConvertTable(hTable, ACE.ADS_IGNOREFILTERS,
                              "C:\\Data\\MyTable.DBF", ACE.ADS_VFP);
if (lRetVal != ACE.AE_SUCCESS)
{
    // Handle error here
}

Although the sample code above is in C#, the ACE API can be accessed from virtually every Advantage client. In fact, you can call AdsCopyTable, AdsCopyTableStructure, AdsCopyTableContents and AdsConvertTable directly from a TAdsTable component in Delphi. There is also a TAdsBatchMove component which can be used when moving large numbers of records between tables.

Monday, February 15, 2010

Copying Tables

There are a few ways to quickly copy tables using Advantage. You can copy the contents of one table to another table using SQL or using the Advantage API. These methods are very efficient since the operations are performed entirely on the server. You can use either method to create a new table and copy the data. You can also copy data from one table to another with or without a filter.

SQL is probably the simplest way to copy the contents of one table to another. There is no real programming involved; you can simply use the SQL window in Advantage Data Architect (ARC). The following examples demonstrate copying the contents of a table to a new table and to an existing table.

// Copy the contents from a table into a new table 
SELECT * INTO Employee_b FROM Employee
// Copy the contents from a table into an existing table
INSERT INTO Employee_c SELECT * FROM Employee
// Copy a portion of the records using a WHERE clause
SELECT * INTO Finance_Employees FROM Employee
  WHERE Department = 102
INSERT INTO Finance_Employees SELECT * FROM Employee
  WHERE Department = 102

There are two Advantage API calls that copy tables: AdsCopyTable and AdsCopyTableContents. AdsCopyTable creates a new table with the same structure and data as the original. AdsCopyTableContents copies the records from the first table to another table with the same structure.

You can limit the records that are copied by specifying a filter, a scope or both on the table to be copied. You must then set the options parameter to respect filters or scopes: ADS_IGNOREFILTERS ignores all filters and scopes, ADS_RESPECTSCOPES respects scopes but ignores filters, and ADS_RESPECTFILTERS respects both filters and scopes.

The following code is a C# example of using these functions. There are C examples and Delphi examples available in the help file. I left out most of the error checking for brevity; however, you should always check the return value of any ACE call. To get access to the ACE namespace, add "using AdvantageClientEngine;" at the top of your .cs file.

uint lRetVal = ACE.AE_SUCCESS;
IntPtr hConnect = IntPtr.Zero;
IntPtr hSourceTable = IntPtr.Zero;
IntPtr hDestinationTable = IntPtr.Zero;
// Connect to a directory of free tables
lRetVal = ACE.AdsConnect60("C:\\Data", ACE.ADS_REMOTE_SERVER, "", "",
                           ACE.ADS_DEFAULT, out hConnect);
if (lRetVal != ACE.AE_SUCCESS)
{
    // Error occurred; use AdsGetLastError to retrieve the error message
}
// Open the source table
lRetVal = ACE.AdsOpenTable(hConnect, "C:\\Data\\Table1.adt", "", ACE.ADS_ADT,
                           ACE.ADS_ANSI, ACE.ADS_DEFAULT, out hSourceTable);
// Create a new table with the same structure as the source table
lRetVal = ACE.AdsCopyTable(hSourceTable, ACE.ADS_RESPECTFILTERS,
                           "C:\\Data\\Backup\\TableCopy.adt");
// Open the destination table
lRetVal = ACE.AdsOpenTable(hConnect, "C:\\Data\\Backup\\Table2.adt", "", ACE.ADS_ADT,
                           ACE.ADS_ANSI, ACE.ADS_DEFAULT, out hDestinationTable);
// Copy the contents from source to destination
lRetVal = ACE.AdsCopyTableContents(hSourceTable, hDestinationTable, ACE.ADS_RESPECTFILTERS);

Here is a simple function that will return the error message for a given error code.

private string GetAdsError(uint uiErrCode)
{
    ushort usBufLen = ACE.ADS_MAX_ERROR_LEN;
    char[] ucBuf = new char[usBufLen];
    ACE.AdsGetErrorString(uiErrCode, ucBuf, ref usBufLen);
    return new string(ucBuf, 0, (int)usBufLen);
}

Monday, February 8, 2010

Book Review – Rocket Surgery Made Easy

I read Steve Krug's first book Don't Make Me Think last year, which is a very good discussion of web design and how to make your web site more usable. So when his new book Rocket Surgery Made Easy came out I picked up a copy right away. This book discusses usability testing, in particular do-it-yourself usability testing. The book is short (~150 pages) but contains a lot of practical advice on usability testing. You can see the table of contents using this link.

The book is divided into three sections: finding usability problems, fixing usability problems and the road ahead. The bulk of the book discusses how to find usability problems. Not surprisingly, Steve's method for finding usability problems is doing usability testing. Although the book focuses mostly on web sites it applies to any user interface.

Chapter one gives a great overview of the types of usability testing that can be done. It emphasizes that many usability tests can be done by just about anyone on a very limited budget, and that this do-it-yourself usability testing brings great rewards.

Chapter two provides a link to an actual usability test conducted by Steve. Watching this test is a very compelling argument since it really looks quite simple. It is a 25 minute video which you can view here.

The next six chapters break down the how-to of a usability test. Chapter three discusses the frequency of usability tests. His recommendation is once a month. If you are doing agile development then you should do a usability test every sprint. Basically test as early in the development cycle as possible.

Chapter four discusses what to test. Basically this is anything that works as designed. This fits really well with agile development since you have testable things with every sprint. With "traditional" development it may be more difficult to identify items that are ready to test but by keeping to a regular testing schedule developers will get things ready to test.

Chapters five through seven discuss finding testers, writing test scenarios and a set of useful checklists. There is a lot of good information in these chapters, but essentially anyone can be a tester. Two big recommendations are to use your current customers and to never use the same tester twice. The checklists are also a great guideline to ensure that the tests are done the same way every time.

I really enjoyed chapter eight, which discusses how to be a test moderator. There is some excellent advice on keeping the user on task without telling them what to do. This is crucial since you're trying to find out if your web page or application is intuitive. Users should be able to work their way through a task without someone telling them what to do.

Chapter nine discusses who should be involved in the usability test. Basically it should be open to anyone who has an interest in the project that is being evaluated. However, the observers need to be separate from the test participant. The test should be conducted with a participant and a moderator in the room. Everyone else watches from another room using screen sharing software. Although recordings of usability tests are valuable watching the test live is the best choice.

The next section, chapters ten through thirteen, discusses how to go about fixing the problems identified by the usability tests. Chapter ten gives some good advice for conducting the wrap-up meeting, which is held after a set of tests has been completed. This is the time to identify the biggest usability issues. Chapter twelve is a list of some of the most common usability problems.

Chapter eleven discusses doing "the least you can do". Basically you should make the smallest changes that make the most impact. A full redesign is rarely needed and even more rarely accomplished. By making small changes you can quickly overcome many of the most problematic usability issues. Additionally small changes are easier to implement, quicker to complete and easy to measure.

Chapter thirteen discusses some of the roadblocks to making changes to improve usability. This chapter contains some interesting observations on why things don't get fixed. Although the excuses sound well thought out and reasonable, they really don't hold up under close scrutiny. If you have a serious usability problem in your product, shouldn't fixing it have the highest priority? That clever new feature might be fantastic, but it doesn't do much good if your users can't use it properly.

The last section of the book discusses some additional resources and provides a good summary of do-it-yourself usability testing. Chapter fourteen discusses desktop sharing software. Chapter fifteen is a list of additional reading on usability testing, some of which might make my crowded reading list. Chapter sixteen sums up all of Steve's maxims of usability testing.

The bottom line: I enjoy Steve's lighthearted approach to writing. He presents his material in a very entertaining fashion and keeps it brief and to the point. If you have considered doing usability testing on your project but are scared of the cost, this book provides some great advice for getting good feedback on a budget. It is a quick read that makes you believe in the value of usability testing, and it is a convincing argument for working such testing into your schedule.

Steve Krug did a presentation at the Business of Software conference on "The Least You Can Do About Usability" in 2008. You can view the video on the Business of Software web site.

Friday, February 5, 2010

Who Does the Testing

As I discussed in my last post there are many types of testing, but who should be doing the testing? That depends on what type of test is being conducted. Unit tests, regression tests and integration tests can be automated in many cases. These tests are typically written by the developers during development or by the QA department.

Automated tests are extremely good at demonstrating that features and functionality work correctly. They can also be used to measure performance. By always running the same tests, regressions can be identified quickly and we can be assured that existing features continue to work as new features are added.

However, there are many tests that simply cannot be automated, especially if your application contains a lot of UI elements. Sure, there are programs that click buttons in a specific sequence and can "observe" the results. But this isn't the same as a person interacting with the product. When a real person is clicking the buttons they can also evaluate how good the workflow is, how easy it is to move through the steps of the task and, most importantly, provide qualitative feedback.

These types of testers are a rare find. A good QA person needs to be someone who enjoys working on repetitive tasks or at the very least configuring several complex environments. The tester needs to be able to evaluate not only that the software is performing correctly but that it is doing the job in a usable way. Joel Spolsky has some advice on choosing testers.

If your program has a significant user interface you should include some usability testing in your schedule. These tests should be conducted in a semi-controlled environment with little direction. A user should be given a task to complete with the software, and the session should be observed and/or recorded. This allows the developer(s) to see how easy their interface is to use. It can also identify places where additional error handling or explanation is necessary.

These kinds of tests should not be conducted by the engineer who built the interface. Let's face it, the interface that I build may make perfect sense to me because I know what it is supposed to do and how it does it. Using the form may not be as obvious to someone else working with the product. A sequence of actions to perform a task that is quite natural to me may be frustrating for many users. Jeff Atwood made a similar point in his article "Open Source Software, Self Service Software" when discussing self-service checkout.

There are certain rituals to using the self-service checkout machines. And we know that. We programmers fundamentally grok the hoops that the self-service checkout machines make customers jump through. They are, after all, devices designed by our fellow programmers. Every item has to be scanned, then carefully and individually placed in the bagging area which doubles as a scale to verify the item was moved there. One at a time. In strict sequence. Repeated exactly the same every time. We live this system every day; it's completely natural for a programmer. But it isn't natural for average people. I've seen plenty of customers in front of me struggle with self-service checkout machines, puzzled by the workings of this mysterious device that seems so painfully obvious to a programmer. I get frustrated to the point that I almost want to rush over and help them myself. Which would defeat the purpose of a.. self-service device.

I related to this example right away. It seems very natural to me to do things step by step since that is the way we have to write programs. Most people don't break tasks down into these small chunks. A good tester can identify where these steps seem onerous which allows the developers to streamline the task.

Finally if you need some convincing on hiring testers take a look at Top Five (Wrong) Reasons You Don't Have Testers.

Wednesday, February 3, 2010

The Power of Testing

Testing is an often dreaded but very necessary part of the development process. It can be a tedious task but it can really pay off in the long run. I think this (de)Motivational poster sums it up nicely.


I realize that there isn't time to test everything. Like all programming tasks, testing needs to be added to the schedule. Therefore, you need to choose the tests that best suit your situation. Here are a few examples:

  • Unit Testing
  • Regression Testing
  • Integration Testing
  • Usability Testing

Unit testing was brought to the forefront with the concept of Test-Driven Development (TDD). TDD encourages developers to write a unit test first and then create the method. This ensures that every method has a test written for it. If you are trying to apply this to an existing project it can be a very large task. For new projects there are many tools available which help automate the process.

Creating unit tests for every function can be a time consuming process and needs to be balanced with writing the code that actually does the work. This can be further complicated since you need both positive and negative tests in many cases. You need to ensure your function returns the correct results for valid input as well as handles invalid input properly. Achieving the right number of unit tests can be a somewhat daunting task.

Two types of tests which are almost always used are regression and integration tests. Regression testing ensures that changes to the code don't break any existing functionality. This is a step up from unit level tests in that it demonstrates that all of the units work together properly. Testing of new features is generally referred to as Integration Testing since it tests the addition of new features with the old.

When you are developing a program that will be used by people frequently, usability testing is a valuable tool. Many people think that this is a very time consuming and expensive type of testing to do. However, it doesn't have to be. You can conduct informal usability tests by simply grabbing someone in the hall and asking them to look at a form or web page that you just completed. If they can use the form then you're on the right track.

Other simple usability tests can be conducted in a few hours using screen sharing software and a little help from your customers. This allows your developers to watch a user work with the program and identify good and bad aspects of the interface. I believe this kind of feedback is invaluable, it will lead to improvements and is well worth the time investment.

There are many other categories of tests that can be run, and ultimately the level of testing you perform is going to depend on many factors. Although testing is very important, delivering a product to market is just as important. Do enough testing to ensure that your product will work correctly and effectively, then get it out to your best testers: your customers.

Monday, February 1, 2010

FAQs – January 2010

Client Communication Errors

Clients can get disconnected from Advantage in many ways. The network connection could fail, the client could lose power or the user could simply close the program abnormally. If the server detects that the client has been disconnected it will log a 7020 error.

However, some communication errors can generate an error on the server itself. In rare cases a 9094 error, which also generates a crash dump file, can be caused by a client communication failure. This has been addressed with the addition of a new error code, 7213, which does not generate a crash dump file. This error is logged when the client gets a temporary or non-fatal communication error while sending a request to the server.

Two knowledge base articles that address this issue have been posted: 7213 Error Logged in the Advantage Error Log and Error 9094 Logged and Dump File Generated.

"Empty" Date Fields in VFP Tables

When a date field is empty in a Visual FoxPro (VFP) table the Advantage ODBC Driver will return 12/30/1899 as the date. This is the default return value when an Xbase field contains an empty value.

You can avoid this situation in one of two ways. First, VFP allows fields to be created with NULL support. When the field is created this way, Advantage will return NULL if a value has not been defined.

Alternately you can check the field for an empty value by using the EMPTY() function. This is available as both an SQL scalar function as well as a filter expression. The EMPTY() function will return true if the field contains the Xbase empty value for the given field type.
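
For example, a sketch of both uses (table and field names are illustrative):

```sql
// Filter expression: return only records where the date field has a value
SELECT * FROM Orders WHERE NOT EMPTY(ShipDate)
// Scalar function: flag which records are missing the date value
SELECT OrderID, EMPTY(ShipDate) AS MissingShipDate FROM Orders
```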


Can a Server Replicate to Itself?

Yes, an Advantage server with replication enabled can replicate to itself. You simply have to create another database on the server and configure it as the subscriber. This can be very useful for keeping an up-to-the-minute backup of your data. Create the second database on another drive within the server so that if the primary drive fails you have a current backup immediately available.

You could also replicate to an external drive or to a Network Attached Storage (NAS) device for an additional measure of security.

Creating a Mini-Dump for a Specific Error Code

Advantage automatically creates a crash dump file whenever a 9000 class error occurs. These files assist the R&D team in determining what is going on with the server when the error occurred. Generally this allows us to quickly determine the problem and recommend solutions.

You can also create a mini-dump when you want more information about why a specific error code is being generated. This is done by adding some additional values to the registry. Create registry values named MINIDUMPERROR, MINIDUMPFILE, and MINIDUMPLINE in the ADS registry location HKLM\SYSTEM\CurrentControlSet\Services\Advantage\Configuration.

To generate the dump file, find the error number, file name and error line you wish to create the dump for in the Advantage error log (ADS_ERR.adt), then repeat the operation that caused the error. Advantage should generate a file named adsdump*.gz in the root of the C: drive, or in the location configured as the error log path.
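
As a sketch, the three values could be created from an elevated command prompt with reg.exe. The error number, file name, and line number below are placeholders (take the real ones from the matching row in ADS_ERR.adt), and the registry value types are my assumption, so double-check them against the knowledge base article:

```shell
:: Registry path used by the Advantage configuration
set ADSKEY=HKLM\SYSTEM\CurrentControlSet\Services\Advantage\Configuration

:: Placeholder values; replace with the error number, file name and
:: error line from the entry in ADS_ERR.adt
reg add %ADSKEY% /v MINIDUMPERROR /t REG_DWORD /d 7200 /f
reg add %ADSKEY% /v MINIDUMPFILE /t REG_SZ /d source.c /f
reg add %ADSKEY% /v MINIDUMPLINE /t REG_DWORD /d 100 /f
```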

For more information about this process see this knowledge base article.