Generate a Java-based Ajax-enabled Web App in 5 Minutes

Over the last couple of years there has been a lot of talk about Ruby on Rails and Grails and how easy it is to use them to quickly build an application.  How would you like to have all of that speed of development in a technology that you already know?  If you are familiar with Struts or JSF, you can use Rev to quickly build an application.

Rev is a code generation tool developed by Vgo Software, and it gives you the power to do just that.  All you need is a database, a JDBC driver (most of the common ones are provided with Rev out of the box), and a JDK version 1.5 or later.  Using Rev you’ll be able to generate a completely functional CRUD application based on the tables that you select.  The output can be in a variety of different flavors: JSF, Struts, JSF with AJAX, JDBC, EJB, Hibernate, etc.  Rev also generates Ant build scripts for a variety of popular application servers so you can build and deploy your application directly from the tool.

What good is a CRUD application?  It all depends on what type of application you are building.  For adding testing data or building some Administration screens for a system, the Rev output may be all you need.  If you are building a more complicated system, then maybe the persistence layer is all you need and you can rework most of the UI layer.  All of the source code is available for you to modify as you see fit, so whether it is the final application itself or the basis for something bigger you will always have something to start with.

One of the unique features of Rev is the ability to customize the generation.  Not only can you easily customize the stylesheet from within Rev, but if you want to go deeper you can customize the templates that Rev uses to generate virtually whatever you’d like.  From modifying the JSP pages that get generated to creating a whole new set of templates for a completely different language, you can do it all!  In fact, included with Rev is a set of templates for generating a PHP-based application.

You can download your free trial of Rev at the Vgo Software site.  Also, be sure to sign up for the webinar I will be presenting on June 30th at 11:00 a.m. EST.  During that webinar I will demonstrate how to use Rev and talk about the various output options.


XML Schema Design: Part 3

This is Part 3 of a 3 part series on XML Schema Design.  Check out Part 1 or Part 2.

I recently helped complete a project for a large enterprise and this series was inspired by that work and some of the questions that were raised during that process. This last part of the series covers some ways to make your schema design more flexible.

Reasons to make it more flexible were covered in Part 1, but the basic idea is adopted from evolution. If your solution is extendable and adaptable, it will encourage more people in your organization to use it, ensuring its survival. Ideally, different applications within the enterprise will be able to make use of the schema without requiring an updated release of the XSD to adapt to each application’s specific needs.

Extendability

In order to achieve expandability within a single version of the schema, it becomes necessary for the types and elements within the schema to allow the addition of different elements and even different attributes. This allows a user of the schema to add their own elements without violating the schema definition, and therefore promotes the schema’s use within the organization.

To provide extensibility to the schema, named complex types could have the following elements added to their definition:

<xsd:any namespace="##targetNamespace" processContents="strict" minOccurs="0" maxOccurs="unbounded" />
<xsd:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded" />

These wildcard declarations will render the schema invalid if optional elements appear before them in the content model. To prevent this error, add a new element to encompass the generic content. Your final definition would look like:

   <xsd:complexType name="ExtraData">
       <xsd:sequence>

           <xsd:any namespace="##targetNamespace" processContents="strict" minOccurs="0" maxOccurs="unbounded" />
           <xsd:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded" />
       </xsd:sequence>
   </xsd:complexType>

The addition of these two element declarations allows any other elements from the target namespace, as well as elements from any other namespace, to be added to the type. Finally, to allow arbitrary attributes, the following attribute declaration could be added to named complex types:

<xsd:anyAttribute namespace="##any" processContents="skip" />
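As an illustration, an instance document could then carry extra data without violating the schema. The Profile element and the ext namespace below are hypothetical, chosen only to show the shape of such a document:

```xml
<!-- Hypothetical instance fragment: Profile and the ext namespace
     are illustrative, not part of the schema above. -->
<Profile xmlns:ext="http://example.com/extensions">
   <FirstName>Rob</FirstName>
   <ExtraData>
      <!-- an element from a foreign namespace, validated laxly -->
      <ext:LoyaltyTier>Gold</ext:LoyaltyTier>
   </ExtraData>
</Profile>
```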

A Better Approach?

This method of extensibility works, but it does so by allowing almost any XML construct to be added inside that ExtraData element. This may not always be what you want. Instead, by abstracting out just enough structure to make the schema flexible, you may be able to achieve the same goal with more control.

For instance, consider an XML Schema that contains many different discrete data points. Let’s take a simple user profile type definition as an example:

   <xsd:complexType name="UserProfileType">
       <xsd:sequence>
           <xsd:element name="FirstName" type="xsd:string"></xsd:element>
           <xsd:element name="LastName" type="xsd:string"></xsd:element>
           <xsd:element name="AccountCreated" type="xsd:dateTime"></xsd:element>
       </xsd:sequence>
       <xsd:attribute name="userId" type="xsd:string"/>
   </xsd:complexType>

An example of an XML document instance that validates against this schema definition might be:

  <Profile userId="rjava">
     <FirstName>Rob</FirstName>
     <LastName>Java</LastName>
     <AccountCreated>2009-05-26T09:00:00</AccountCreated>
  </Profile>

This might work fine for a while, but what happens when you want to keep track of the last time the user accessed the site? You would have to change the schema definition. The solution discussed above would work, but what if we create a generic data type:

   <xsd:complexType name="DataType">
       <xsd:sequence>
           <xsd:element name="Description" type="xsd:string" minOccurs="0"/>
           <xsd:element name="StringValue" type="xsd:string" minOccurs="0"/>
           <xsd:element name="DateValue" type="xsd:dateTime" minOccurs="0"/>
       </xsd:sequence>
       <xsd:attribute name="name" type="xsd:string" />
   </xsd:complexType>

We could use this generic data type inside our UserProfileType:

   <xsd:complexType name="UserProfileType">
       <xsd:sequence>
           <xsd:element name="Data" type="tns:DataType" minOccurs="0" maxOccurs="unbounded"></xsd:element>
       </xsd:sequence>
       <xsd:attribute name="userId" type="xsd:string"/>
   </xsd:complexType>

We could now represent the same data using an XML document like this:

  <Profile userId="rjava">
     <Data name="FirstName">
        <StringValue>Rob</StringValue>
     </Data>
     <Data name="LastName">
        <StringValue>Java</StringValue>
     </Data>
     <Data name="AccountCreated">
        <DateValue>2009-05-26T09:00:00</DateValue>
     </Data>
     <Data name="Email">
        <StringValue>[email protected]</StringValue>
     </Data>
  </Profile>

So now we have made the UserProfileType very fluid, perhaps too fluid depending on what you want to accomplish.  It is expandable simply by adding instances of Data to the UserProfileType element in your XML document, but it doesn’t require any fields or even suggest any.  A better approach may be to combine the first two examples.  That way you can enforce the required fields, make some of the more common fields optional, and still leave room for other elements that new applications may require.
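A sketch of that combined approach might look like the following; exactly which fields stay required is up to your design:

```xml
<!-- Sketch of the combined approach: FirstName and LastName stay
     strongly typed and required, while Data leaves room to grow. -->
<xsd:complexType name="UserProfileType">
    <xsd:sequence>
        <xsd:element name="FirstName" type="xsd:string"/>
        <xsd:element name="LastName" type="xsd:string"/>
        <xsd:element name="AccountCreated" type="xsd:dateTime" minOccurs="0"/>
        <xsd:element name="Data" type="tns:DataType" minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
    <xsd:attribute name="userId" type="xsd:string"/>
</xsd:complexType>
```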

Conclusion

It is very important that the Canonical XML Schema be as easy as possible to understand while still maintaining reusability and flexibility.  A Canonical XML Schema is only as good as its user base is large.  There isn’t much point in investing in one if the organization as a whole is not going to adopt it.

Hopefully this series of articles has given you some ideas on how to design an XML Schema that your organization can make use of.  Getting the business as a whole to adopt something like this isn’t going to be easy, especially if you don’t have an immediate need for one.  My suggestion would be to start small: begin with new applications that require an XML Schema.  Prove the value of a proper enterprise-wide design by showing how the time and effort for enhancements and changes can be reduced, and you will go a long way toward getting it adopted.


XML Schema Design: Part 2

Now that we’ve gotten the whats and whys out of the way, we can start to talk about the guidelines themselves.

Naming Conventions
Naming conventions should be used in order to provide the understandability required in any good schema.

Names for all elements, attributes and types should be explicit.  Abbreviations should only be used where such abbreviations are obvious to anyone familiar with the domain. Any XML Types should be suffixed by the word Type. Elements should not include any suffix.

UpperCamelCase should be used in naming XML Elements and Types. This means that the first letter of each word that makes up the name is capitalized, including the first letter of the name. For example: CompanyName, AddressType, FirstName could all be valid names within the schema.

XML Attributes should be named using LowerCamelCase in which the first letter of the name is in lowercase and each additional word within the name will start with an uppercase letter.
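Put together, those conventions might yield a fragment like this (the names here are hypothetical, chosen only to illustrate the casing and the Type suffix):

```xml
<!-- Hypothetical fragment: UpperCamelCase for types and elements,
     lowerCamelCase for attributes, a Type suffix on type names only. -->
<xsd:complexType name="AddressType">
    <xsd:sequence>
        <xsd:element name="StreetName" type="xsd:string"/>
        <xsd:element name="City" type="xsd:string"/>
    </xsd:sequence>
    <xsd:attribute name="countryCode" type="xsd:string"/>
</xsd:complexType>
```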

The truth is, you can use any naming convention you want as long as it is consistent.  Consistency is the foundation of any good design and XML Schemas are no exception.

Elements Versus Attributes

Since the XML Specification allows the same information to be stored either as attribute values or as element values, there is a long-standing debate over whether to prefer attributes or elements for content. For a Canonical XML Schema design that requires flexibility and the ability to expand in the future, it is recommended that elements be created for most content and that attributes be used only to provide descriptors for those elements when necessary.

Another best practice is to use an element to represent data that can stand on its own, independent of any parent element, and to use attributes to represent properties or metadata of that element. For example, a Contact element may have a type attribute that tells the user what kind of contact information is provided, i.e. an e-mail address or a phone number.
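A small hypothetical instance fragment illustrates the split: the contact value stands on its own as element content, while the kind of contact rides along as an attribute describing it:

```xml
<!-- Hypothetical fragment: the value is element content,
     the kind of contact is an attribute describing it. -->
<Contact type="email">[email protected]</Contact>
<Contact type="phone">555-0100</Contact>
```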

As you begin the process of abstracting elements to allow for expandability, you may find that it makes sense to use elements in cases where you had used attributes before.  That is okay.  More on that when we discuss abstracting the schema.

Types Versus Elements

Another decision that needs to be made in designing an XML Schema is when to use types and when to use elements. In the case of a Canonical XML Schema, XML Types should be used extensively to make the schema easier to understand and easier to reuse. In cases where a generic XML Type could be reused, the need to create a type is obvious. Even where an element or type will most likely not be reused, however, an explicit type definition allows potential users of the schema to interpret the design more easily. So as a general rule, do not use anonymous complex types.

Data Types

User-derived types are composites (Complex) or subsets (Simple) of existing types. The extensions are used to consistently constrain the schema so that its use becomes easier. For example, a CurrencyAmount type can be created so that whenever a currency amount is needed in the schema, the generic type can be used. This way if the definition of the type changes, it can be changed in one place.  It also means that the developer of extensions to the schema does not need to think about how currency amounts will be constrained because it has already been done.

Use of Simple Types

Simple user-derived types are subsets of existing types. These types constrain the lexical or value space of the parent type. An example of a simple type is one that limits the value of an element to a list of values. A use for this may be in limiting Currency values to the standard ISO 4217 currency codes.
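For example, a restriction like the following limits a currency value to a fixed set of codes. The type name is illustrative and the code list is abbreviated:

```xml
<!-- Illustrative simple type: an abbreviated list of ISO 4217 codes. -->
<xsd:simpleType name="CurrencyCodeType">
    <xsd:restriction base="xsd:string">
        <xsd:enumeration value="USD"/>
        <xsd:enumeration value="EUR"/>
        <xsd:enumeration value="GBP"/>
    </xsd:restriction>
</xsd:simpleType>
```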

Use of Complex Types

Complex Types are user-derived types that allow various elements to be combined and represented as a whole. A Complex Type is also necessary if you wish to create a type that uses an attribute. To continue our example from above, CurrencyAmount may be a Complex Type that includes both the type of currency (e.g., US Dollar or Euro) and the amount (e.g., 123,000) as elements within it.

In order to provide a grouping of elements, an xsd:sequence is used. This sequence contains the elements included in the type and the order in which they appear. Elements within the sequence have two important attributes, minOccurs and maxOccurs, which indicate whether the element is required and how many times it may appear.
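Continuing the CurrencyAmount example, a sketch of such a complex type could look like this. The names are illustrative, and CurrencyCodeType is assumed to be a simple type defined elsewhere in the schema:

```xml
<!-- Illustrative complex type combining a currency code and an amount;
     tns:CurrencyCodeType is assumed to be defined elsewhere. -->
<xsd:complexType name="CurrencyAmountType">
    <xsd:sequence>
        <xsd:element name="Currency" type="tns:CurrencyCodeType"/>
        <xsd:element name="Amount" type="xsd:decimal"/>
    </xsd:sequence>
</xsd:complexType>
```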

File Structure

When a large schema is being created, it is best to use references to include the various complex types of the schema in one definition. This allows the definition to be split up among multiple files and makes the reading of those definitions much easier. It also becomes easier to reuse portions of the schema because you can reference the subset of complex types or elements that you wish to include.
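As a sketch of this file layout (the file names and namespace here are hypothetical), a master schema can pull in type definitions from other files in the same target namespace with xsd:include:

```xml
<!-- Hypothetical master schema stitching together type libraries;
     CustomerType is assumed to be defined in CustomerTypes.xsd. -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.com/canonical"
            xmlns="http://example.com/canonical">
    <xsd:include schemaLocation="CommonTypes.xsd"/>
    <xsd:include schemaLocation="CustomerTypes.xsd"/>
    <xsd:element name="Customer" type="CustomerType"/>
</xsd:schema>
```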

Stay Tuned for Part 3…

Part 3 will discuss some ideas on making the schema extendable via abstraction and walking the line between extendable and understandable.

If you haven’t already, check out Part 1 of the series.


Oracle acquires Sun: Who needs to look out now?

As a Java developer who does a lot of work with Oracle products, including JDeveloper and ADF, my head is still spinning a little from the news that Oracle is buying Sun Microsystems.

Oracle buying BEA hurt a little.  Though it was completely expected and a great move on Oracle’s part, I was a little sad to see the application server competition field drop by one, but I was very happy that Oracle was smart enough to choose WebLogic.  At that point it was really the only decision they could make.

With Oracle buying Sun there is a lot of synergy: many technologies are duplicated between the two companies.  Oracle owning both should make those technologies better and enable them to compete with the leaders in their respective areas.  The big ones that stick out for me:

1. Oracle’s JDeveloper and Sun’s NetBeans

Could they really afford to drop NetBeans?  Probably not.  But can they afford to drop JDeveloper?  No, not really.  Here the only thing that really makes sense is to merge the two, probably adding the ADF wizards and goodies like that into NetBeans.  At least, that is what I hope they do.  JDeveloper isn’t bad, but I only ever use it to develop ADF projects and I bet many, many people are in that same boat.  Combining the two could end up giving Eclipse a run for its money; hopefully the competition just spurs both to be better.

2. Oracle’s Oracle VM and Sun’s VirtualBox

I haven’t had much experience with Oracle VM, but I have lately become a huge fan of Sun’s VirtualBox.  It’s a great product and it lets me do everything I want for free.  Will this continue to be the case?  I don’t know.  I’m not an expert on virtualization in the enterprise; I use it for desktop VMs, and I hadn’t seen much about VirtualBox working in that space.  I would imagine Oracle VM is all about virtualizing the network and competing with VMware on that level.  With the two together, VMware has some competition.

3.  Oracle’s Unbreakable Linux and Sun Solaris

Oracle had a great jumpstart to their Linux platform, basing it on the Red Hat codebase way back when.  Solaris was my first exposure to any type of Unix (Solaris and AIX, actually) and it has been around forever.  If the adoption of Linux has hurt anything, it’s probably been Solaris and, through that, sales of Sun’s hardware.  Oracle says that owning Solaris will enable them to tune the Oracle Database software to run even better on it, and since, according to Oracle, most of their database customers are using Solaris, I think they’ll probably do that.  I have no idea what will happen to Unbreakable Linux though.  Who has to look out with this one?  I’d say IBM.  Buying Sun probably would have been good for them in the products space; I think the only area IBM is going to be competing in going forward is services.  Red Hat has Ubuntu to worry about on the desktop side and now a bigger threat from Oracle and Sun on the server side; they have their work cut out for them.

4.  Oracle Database and Sun’s MySQL

MySQL has a huge customer base, most of them probably non-paying.  I think with this one, Oracle just adds it to their ever increasing repertoire of niche databases.  It won’t go away, but I see less adoption in the future, and maybe a boost for PostgreSQL if they can get their act together.

5. Sun’s Java and Oracle’s ADF

Oracle has always been a big player in the specifications for the Java language.  I’m sure someone else will go into all the details, because I honestly don’t know them off the top of my head, but I do know that many technologies and ideas that ADF is based on were either approved JSRs or close to approved JSRs.  Does Oracle’s acquisition of Sun and Java mean that they will be better equipped to push through whatever they want to add to the language?  Well, I don’t think it will be quite that easy, but I’m sure it makes it easier.

I’ve always been a Java guy at heart.  I work with Oracle technology sometimes, and I think they have really come a long way, but Oracle owning Java does kind of scare me a little.  One thing Oracle does really well, and JDeveloper is great at this, is making complex technologies easy to use.  It is what Microsoft does really well too: .NET makes easy the things that Java makes hard.  ADF actually does a lot of the same.  The combination of ADF and Java together could pose a big threat to Microsoft’s .NET if Oracle does it right.

My first thought about Oracle owning Java is that many developers are going to jump up and down about it and complain.  Some will probably jump ship, maybe to .NET but probably to Ruby or PHP or something else.  I don’t think many corporations are going to change the direction of their IT departments though, so for them, it will be .NET or Java as it always has been.  In the end, I think most Java developers are going to remain Java developers, and hopefully Oracle’s backing of Java will just end up making it a better language to work with.

Microsoft might have more to worry about now that Oracle owns OpenOffice as well.  I hope that Oracle continues to invest in it, or it’ll end up being Microsoft Office vs. Google Apps and that’s about it.  I’m all for cutting edge, but Gmail hasn’t come out of beta yet and I’d like to see Microsoft have some competition in this area.

So I wanted to get my thoughts out there while they were floating around in my head, and hopefully into yours, so I could hear your opinions on the topic.  Please let me know what you think about this acquisition and what it means for the future of technology and competition in the field.


Simple Fix for Tomcat on Windows

I finally found the answer to one of life’s most difficult questions: why does Tomcat fail every time I redeploy a WAR file?  The answer is that on Windows, Tomcat holds a lock on certain files in the web application.  There is a simple fix to this dilemma: edit the context.xml file in your TOMCAT_HOME\conf directory.

Add two attributes to the Context element in the file, so that the element looks like this:

<Context antiResourceLocking="true" antiJARLocking="true">

Problem solved.  You should be able to just copy a new version of the war file on top of the old one in the webapps directory of Tomcat and it will redeploy.

This works for me on Windows XP using apache-tomcat-6.0.18.


XML Schema Design: Part 1

Introduction

This post and the posts that follow are to provide some of my guidelines and best practices for creating and utilizing an enterprise-wide XML Schema.  I will start off with some background in this post, then move on to the guidelines and best practices in future posts.  Jack Van Hoof has a great article about Canonical Data Models (CDMs) and what they are good for on his blog.  This enterprise-wide XML Schema is an implementation of a CDM and will hereafter be referred to as a Canonical XML Schema.

Background
The primary requirement of a Canonical XML Schema and the related data model is to provide a standard format in which all content will be distributed, thereby requiring applications to adhere to this common format.  If a new application is added to the platform, only a transformation between the application’s native format and the Canonical XML Schema will be needed to allow it to produce or consume the required content.

In addition, 5 criteria should be considered:

  1. Completeness – The entirety of elements in the source schemas should be present in the new schema.
  2. Minimalism – Each element should be defined only once.
  3. Expandability – The schema should be able to accommodate data that was not originally found in any of the source schemas; that is, it should allow its use to grow rather than hinder it in the future.
  4. Comprehension – The schema should be formulated in a way that allows for easy browsing and querying.
  5. Performance – Understanding how the content in the XML documents supported by the schema will be used can help in determining some of the structure within the schema.  For instance, if one intended use of the produced XML is to provide rapid searching, then the schema should be structured to support fast searches.

Keep in mind that these criteria are often at odds with one another.  For example, designs that emphasize expandability do so at the risk of deemphasizing performance and comprehension.

Why Guidelines?

A common problem with the XML content produced and consumed by various applications within many enterprises is a lack of standards and guidelines for the creation of such content.  A Canonical XML Schema will enforce adherence to a singular structure, thereby enforcing adherence to the guidelines and best practices set forth by the schema itself.  In addition, the Canonical XML Schema must itself be built following guidelines and best practices.  These need to be documented to allow producers and consumers of XML content to understand why the model is designed the way it is and how to expand upon that design when it is necessary to do so.

Think about a group of systems that have grown over the years and are communicating with each other via XML (or even without XML). Once there are more than two systems talking to each other, it makes sense to develop as generic a communication pipeline as possible, and a Canonical XML Schema will help you do that.

Applications Communicating without a Canonical XML Schema

Communication without a Canonical XML Schema

You can see in the picture above that in the enterprise described there are 9 translations of data being performed, one for each pairing of applications. As applications are added, the number of translations grows quadratically, on the order of n² for n applications.

Applications Communicating Using a Canonical XML Schema

Communication using a Canonical XML Schema

In the second diagram, only 6 translations are being performed and the number of translations that need to be performed as new systems are put online grows in a linear fashion.  As new applications are added, only one translation of data needs to be performed, either from the new application to the Canonical XML Schema (if it’s a producer) or from the Canonical XML Schema to the new application (if it’s a consumer).

Next…

Part 2 will describe some of the best practices and guidelines and Part 3 will go into more depth around abstraction of elements and walking the thin line between expandable and understandable.



JavaFX Flashcard Example Updated for 1.1

I have updated the source code for my JavaFX flashcard game example to be compatible with release 1.1.  There certainly were a lot of changes from the pre-release to the release, and this severely impacted the example code.  So severely, in fact, that it was just chock full of compile errors and other issues.  I will attempt to cover most of the changes by explaining the resulting new classes.

The actual structure of the program hasn’t changed.  FlashCardApp is the main program; it builds an array of FlashCard objects.  Each FlashCard object is made up of a Word and a CardImage.  The Word is then made up of individual Letters.

Let’s start by taking a look at the Letter.fx file.  This Letter.fx is a little better designed than the last one.  In this case, instead of extending the generic CustomNode, I extended Text.  After doing that I was able to just override the variables I was interested in: I could set the alignment, font, color and relative position this way. Another change that had to be made was to drop the attribute keyword and use var instead.

I stored the actual letter as a String because I wanted the option of tying two characters together to make one sound (called blends (”cl” or “st”) or digraphs (”sh” or “ch”)).

The Letter class is responsible for displaying the letters and thus needs to know where to position them; the size and position values are provided so it can determine where in the scene to put each letter.

public class Letter extends Text {
    public var myLetter: String; //String the represents a Letter in the Card
    public var position: Number; //Position the letter is in in the current word
    public var size: Number; //Size of the entire word
    override var textAlignment = TextAlignment.CENTER;
    override var textOrigin = TextOrigin.TOP;
    override var translateX = bind ((scene.width - 40) - (((scene.width-40)/size) * (size+1 - position)));
    override var translateY = bind (scene.height - scene.height) / 2 ;
    override var font = Font{ name: 'Arial', size: 150 };
    override var fill = Color.BLACK;
    override var content = bind myLetter;

}

The Word.fx file did not change too much except to extend CustomNode and override the translateX and translateY variables to position the word correctly.

It contains an operation to flip the card, one to show the Word (to reset the state), and a constructor. The interesting part here is the “constructor”. As I mentioned in a previous post, there is no real constructor per se; instead you define a function with the same name as the class and this is used as a constructor.

public class Word extends CustomNode {
    var letters: Letter[];
    var show:Boolean = true;
    override var translateX = bind (scene.width - 40 - boundsInLocal.width) / 2 ;
    override var translateY = bind (300 - boundsInLocal.height)/2 - 20 ;

    public var word: String = null on replace {
       var aSize = word.length();
       var i = 0;
       while (i < aSize) {
            var nLetter = Letter { myLetter:word.substring(i,i+1) position:i+1 size:aSize};
            insert nLetter into letters;
            i++;
       }
    }

    public function flip() {
        show = not show;
    }

    public function showWord() {
        show = true;
    }

    public override function create(): Node {
        return Group {
            visible: bind show
            content: [
                letters
            ]
        };
    }
}

The changes to the CardImage file are similar. I also made use of fitWidth and fitHeight in the ImageView to give all my images consistent sizes.

The show boolean is used in the CardImage class and the Word class to indicate whether or not it should be visible. This is what allows the flip() function to work in FlashCardApp by just setting one to true and the other to false.

public class CardImage extends CustomNode {
    override var translateX = bind (scene.width - 40 - boundsInLocal.width) / 2 ;
    var show: Boolean = false;

    public var imageSrc: String = null on replace {
        show = false;
    }

    public function flip() {
        show = not show;
    }

    public function hideImage() {
        show = false;
    }

    public function CreateCardImage(anImage:String) : CardImage {
        var cardImage1 = CardImage { imageSrc: anImage
        }
        return cardImage1;
    }

    public override function create():Node {
        var cardImage = Image {
            url: imageSrc
        }
        return ImageView {
            translateY: 25
            fitWidth: 500
            fitHeight: 250
            visible: bind show
            image: cardImage
        }

    };

}

Nothing too exciting in the FlashCard.fx file either, but I’ll include it here for the sake of completeness. Note the flip function mentioned above. Also, this is the class where the sound integration is done. There isn’t as much sound as I originally intended; it was originally meant to be able to sound out individual letters or letter groups. I don’t think that would be too difficult to implement, but I haven’t done it yet and probably won’t, since my children have outgrown this little game already.

public class FlashCard extends CustomNode {
    public var cardImage: CardImage;
    public var showFront: Boolean;
    public var cardWord: Word;
    public var mp3: Mp3;

    public var aWord:String = null on replace {
       cardWord = Word{ word:aWord };
    }

    public var anImage:String = null on replace {
        cardImage = CardImage{imageSrc:anImage};
    }

    public var aMp3:String = null on replace {
        mp3 = new Mp3(aMp3);
    }

    public function showWord() {
        cardWord.showWord();
        cardImage.hideImage();
    }

    public function flip() {
        cardWord.flip();
        cardImage.flip();
        mp3.play();
    }

    public override function create():Node {
        return Group{
            content: [
            Rectangle {
                x: 10
                y: 10
                height: 300
                width: 550
                arcHeight: 20
                arcWidth: 20
                fill: Color.WHITE
                stroke: Color.BLACK
                strokeWidth: 2
            }
            ,
                Group {
                    content: [ cardWord,
                            cardImage ]
                }
            ]
        };
    }

}

FlashCardApp.fx had a couple of changes and some things of interest. First of all, instead of using a Frame, it uses a Stage component, as the Frame no longer exists. Inside the VBox I had to put the content inside a Group in order for the displayed FlashCard to be updated when the index changed. Essentially I needed to bind the “card” variable and I could not do that without it being inside a Group component.

The last thing I had to change was to remove a slash from the paths to the images and MP3s. The old insert statement looked like this:
insert FlashCard{aWord:"cat", anImage:"{__DIR__}/img/cat.png", aMp3:"{__DIR__}/sounds/cat.mp3"} into flashcards;

Note the “/” after the {__DIR__} in both instances. I had to remove that slash so that it now looks like:
insert FlashCard{aWord:"cat", anImage:"{__DIR__}img/cat.png", aMp3:"{__DIR__}sounds/cat.mp3"} into flashcards;

At first I thought this might have been because I was developing this new version on Linux, but it turns out the slash is unnecessary on Windows as well. It appears that {__DIR__} already includes a trailing slash, so having two in a row was a problem for the loading classes.
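The underlying issue is easy to reproduce in plain Java: if the base directory string already ends with a separator and the relative path starts with one, naive concatenation yields a doubled slash. Here is a minimal sketch of a join helper that guards against this (the `joinPath` name and behavior are my own illustration, not part of the JavaFX API):

```java
public class PathJoin {
    // Joins a base URL/directory string and a relative path without
    // producing a doubled slash at the seam.
    static String joinPath(String base, String relative) {
        boolean baseEnds = base.endsWith("/");
        boolean relStarts = relative.startsWith("/");
        if (baseEnds && relStarts) {
            return base + relative.substring(1); // drop one of the two slashes
        } else if (!baseEnds && !relStarts) {
            return base + "/" + relative;        // supply the missing slash
        }
        return base + relative;                  // exactly one slash already
    }

    public static void main(String[] args) {
        // {__DIR__} already ends in a slash, so "/img/cat.png" would double it
        System.out.println(joinPath("file:/app/", "/img/cat.png"));
        System.out.println(joinPath("file:/app/", "img/cat.png"));
    }
}
```

Both calls print `file:/app/img/cat.png`, which is the behavior the flashcard paths need.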

var flashcards:FlashCard[];
var myIndex:Integer = 0;
var size:Number = 5;
var card = bind flashcards[myIndex];
insert FlashCard{aWord:"cat", anImage:"{__DIR__}img/cat.png", aMp3:"{__DIR__}sounds/cat.mp3"} into flashcards;
insert FlashCard{aWord:"dog", anImage:"{__DIR__}img/dog.png", aMp3:"{__DIR__}sounds/dog.mp3"} into flashcards;
insert FlashCard{aWord:"car", anImage:"{__DIR__}img/car.png", aMp3:"{__DIR__}sounds/car.mp3"} into flashcards;
insert FlashCard{aWord:"hat", anImage:"{__DIR__}img/hat.png", aMp3:"{__DIR__}sounds/hat.mp3"} into flashcards;
insert FlashCard{aWord:"fish", anImage:"{__DIR__}img/fish.png", aMp3:"{__DIR__}sounds/fish.mp3"} into flashcards;

Stage {
    title: "Flash Card JavaFX"
    width: 600
    height: 400
    scene: Scene {
    fill: Color.GREY
    content: VBox {
        content: [
            Group { content: bind card },
            HBox {
                content: [
                    SwingButton {
                        translateX:0
                        translateY:10
                        text: "Next"
                        action: function() {
                            flashcards[myIndex].showWord();
                            if (myIndex < (size - 1)) {
                                myIndex++;
                            } else {
                                myIndex = 0;
                            }
                        }
                    },
                    SwingButton {
                        translateX:20
                        translateY:10
                        text: "Flip Me!"
                        action: function() {
                            flashcards[myIndex].flip();
                        }
                    }
                ]
            }

            ]
        }
    }
}
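The Next button's wrap-around logic (advance the index, reset to zero after the last card) is just a modular increment. In plain Java terms it could be sketched like this (the class and method names are mine, for illustration):

```java
public class CardIndex {
    // Advances an index through a deck of the given size, wrapping to 0
    // after the last card -- the same logic as the Next button's action.
    static int next(int index, int size) {
        return (index + 1) % size;
    }

    public static void main(String[] args) {
        int myIndex = 0;
        for (int i = 0; i < 6; i++) {
            System.out.print(myIndex + " ");   // prints: 0 1 2 3 4 0
            myIndex = next(myIndex, 5);
        }
    }
}
```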

You can download all of the new source code here. You will need the JLayer library for the Mp3 class to compile.

Be sure to check out the other articles on my blog related to JavaFX and the Flashcard game for more details on the Flashcard JavaFX code itself.

Stephen Chin has a good post about some of the changes from the pre-release to the release.


Accessing Twitter with Java using HttpClient

Today I am going to post about making authenticated REST calls using Java and the Jakarta Commons HttpClient library, using Twitter as the example. I originally wanted an example that would post simultaneously to Facebook, LinkedIn, and Twitter, but Twitter is the only one that allows such access without jumping through unnecessary hoops, so here it is.

The HttpClient library is a very useful library that simplifies the process of making HTTP calls in Java. The project has been around since 2001, originally as a subproject of Jakarta Commons.

The main reason to use the HttpClient library is that it makes calling authenticated HTTP sources easy. Making plain HTTP calls in Java is fairly straightforward these days, but it gets a little more difficult when authentication is involved. The HttpClient library makes the whole process almost trivial, as this example shows.
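For comparison, here is roughly what HttpClient is saving you from. HTTP Basic authentication is just a base64-encoded "user:password" pair in an Authorization header; a hand-rolled sketch using java.util.Base64 (a Java 8+ class, so not what HttpClient itself uses internally, but it illustrates the mechanics):

```java
import java.util.Base64;

public class BasicAuthHeader {
    // Builds the Authorization header value that Basic authentication
    // sends with each request: "Basic " + base64("user:password").
    static String basicAuthHeader(String user, String password) {
        String pair = user + ":" + password;
        return "Basic " + Base64.getEncoder().encodeToString(pair.getBytes());
    }

    public static void main(String[] args) {
        System.out.println(basicAuthHeader("user", "pass"));
        // -> Basic dXNlcjpwYXNz
    }
}
```

HttpClient builds and attaches this header for you (and handles challenges, realms, and other schemes) once credentials are set on the client's state.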

Since there are two different types of calls to make, but both utilize the same foundation in HttpClient, I found it useful to create a helper method that actually sets up the authentication and executes the call that is passed to it.

Both the Get and the Post methods extend HttpMethodBase, so that will be the input argument.  It is assumed that the user name and password needed for authentication are attributes of the class this method is in and that they have already been set.

Here is the code for the helper method:

/**
 * Executes an Authenticated HTTP method (Get or Post)
 * @param method - the get or post to execute
 * @throws NullPointerException
 * @throws HttpException
 * @throws IOException
 */
private void executeAuthenticatedMethod(HttpMethodBase method)
  throws NullPointerException, HttpException, IOException {
  if ((twitteruser == null)||(twitterpwd == null)) {
    throw new RuntimeException("User and/or password has not been initialized!");
  }
  HttpClient client = new HttpClient();
  Credentials credentials =
    new UsernamePasswordCredentials(this.getTwitteruser(), this.getTwitterpwd());
  client.getState()
    .setCredentials(new AuthScope(twitterhost, 80, AuthScope.ANY_REALM), credentials);
  HostConfiguration host = client.getHostConfiguration();
  host.setHost(new URI("http://"+twitterhost, true));
  client.executeMethod(host, method);
}

Here you can see that all that is necessary to set up an authenticated call is to create some new credentials and assign them to the client's State. To configure the call, the host's HTTP address must be set, in this case "http://www.twitter.com". Finally, the method that was passed in is executed. The results can be accessed via that same method object, so there is no need to return anything here.

The Get is really simple once this helper method is in place. To demonstrate it, I will use the retrieval of a Friends Timeline in Twitter. This is an authenticated call, just as the post will be.

/**
 * Gets the 20 most recent statuses by friends (equivalent to /home on the web)
 * @return a String representing the XML response.
 */
private String getFriendsTimeline() {
    String message = "";
    String url = "/statuses/friends_timeline.xml";
    GetMethod get = new GetMethod(url);
    try {
        executeAuthenticatedMethod(get);
        message = message + get.getResponseBodyAsString();
    } catch (URIException e) {
        message = e.getMessage();
    } catch (NullPointerException e) {
        message = e.getMessage();
    } catch (IOException e) {
        message = e.getMessage();
    } finally {
        get.releaseConnection();
    }
    return message;
}

Here, to use the get, all we need to do is create the GetMethod with the correct URL and pass it to our helper method to be executed. The response is returned as a String, which in this case is XML. One item to note is that after the response has been retrieved, the connection is released. This should happen no matter what the outcome of the call is, so that the connection to the host is terminated and its resources are freed up.

The post is almost as simple as the get; the only difference is that for the post we include a parameter containing the Twitter status message to be posted.

/**
 * Update your Twitter status
 * @param status the status text to post
 * @return a message describing the result of the call
 */
private String postStateToTwitter(String status) {
    String url = "/statuses/update.xml";
    String message = "";
    PostMethod post = new PostMethod(url);
    NameValuePair params[] = new NameValuePair[1];
    params[0] = new NameValuePair("status", status);
    post.setRequestBody(params);
    try {
        executeAuthenticatedMethod(post);
        // The status text is only meaningful after the call has executed.
        message = "Status:" + post.getStatusText() + " " + message;
    } catch (URIException e) {
        message = e.getMessage();
    } catch (NullPointerException e) {
        message = e.getMessage();
    } catch (IOException e) {
        message = e.getMessage();
    } finally {
        post.releaseConnection();
    }
    return message;
}

To pass the parameter we use the NameValuePair class from the HttpClient library. Create an array of NameValuePair and set the names and values accordingly; in this case the name is "status" and the value is the String called status that is passed to the method. Again, remember to release the connection after the call has been made. For this particular method call we are not interested in the response body, only in the status of the call. After running this, you should get a status of "OK" returned from the call.
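Under the hood, setRequestBody turns those NameValuePairs into an application/x-www-form-urlencoded body. A rough standalone sketch of that encoding using java.net.URLEncoder (my own illustration, not HttpClient's actual code):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class FormEncode {
    // Encodes name/value pairs the way an HTML form post does:
    // name=value pairs joined by '&', with special characters escaped.
    static String encode(String[][] params) throws UnsupportedEncodingException {
        StringBuilder body = new StringBuilder();
        for (String[] p : params) {
            if (body.length() > 0) {
                body.append('&');
            }
            body.append(URLEncoder.encode(p[0], "UTF-8"))
                .append('=')
                .append(URLEncoder.encode(p[1], "UTF-8"));
        }
        return body.toString();
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        System.out.println(encode(new String[][] {{"status", "hello world"}}));
        // -> status=hello+world
    }
}
```

This is why spaces and punctuation in a status message survive the trip: the pair is escaped before it ever hits the wire.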

It is all pulled together in the TwitterClient.java file which you can download here.

Check out the Twitter API Wiki for more information on the Twitter API.

Check out the HttpClient project home for more information about the HttpClient project.

If you are really bored, you can follow me on Twitter.


Inserting Text into an Oracle Database Schema: ORA-01461

Today I ran into an interesting problem. A portion of our software saves the text of database DML statements into an Oracle schema, and we had been allocating 4000 bytes in a VARCHAR2 column to store that text. We've been using this software to analyze customers' applications for about a year now and hadn't run into any difficulties with this setup.

Today, while persisting some data about a customer's form application, our software started to blow up, throwing Oracle ORA-01461 errors. The text of the error is "can bind a LONG value only for insert into a LONG column." At first glance it seemed obvious enough: we were trying to insert a LONG value into a column that wasn't a LONG column.

The first thing to realize is that a LONG value in Oracle's database world is NOT a Long value in Java. Since there were about six fields of that type being inserted into the table, my first bad assumption was that one of them was mismatched. Of course, they all matched up to the columns in the table correctly, but assuming that something strange was going on in the driver, I manually switched these statements to set BigDecimals instead of longs. Naturally, this didn't help at all.

I did turn out to be correct on one count, however: something strange was happening in the driver. After being puzzled a while longer, I called in a developer to help me talk through the problem, and as often happens, just talking about the problem brought new insight into the situation.

While explaining to this developer that a LONG does not equal a Long, I realized that the problem must lie in the text fields. Duh. Since I was already debugging at that point, I inspected the values I was trying to insert and realized that one of the text fields was actually pretty large. So large, in fact, that the Eclipse debugger couldn't show me the entire value. It did let me know that the String I was looking at was almost 15,000 characters long, however.

Aha! Checking the table structure showed me that the column was set up as VARCHAR2(4000). Apparently, because the String was over 4,000 characters long, instead of throwing an exception that the value was too large for the column, the driver automatically converted it to a LONG datatype and tried to insert that, resulting in the ORA-01461 error I was seeing.

To test this theory, I put in some code to check the length of the String and truncate it to 4,000 characters if it was longer than that. Voila, no more errors. Of course, I'll have some work to do to change the column type, probably to a CLOB like the rest of our really long text fields. That will mean changing some code around, since CLOBs are not as easily read from the database as VARCHAR2 columns are, but at least the problem is isolated.
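The guard itself is a one-liner. A minimal sketch of the truncation I describe above (the helper name is mine; note also that VARCHAR2 limits are in bytes by default, so multi-byte character sets would need a stricter cut than a character count):

```java
public class Truncate {
    // Caps a String at maxChars characters so it fits a VARCHAR2(maxChars)
    // column. Strictly, VARCHAR2 limits are bytes, not characters, so this
    // is only exact for single-byte data.
    static String truncateForColumn(String value, int maxChars) {
        if (value == null || value.length() <= maxChars) {
            return value;
        }
        return value.substring(0, maxChars);
    }

    public static void main(String[] args) {
        StringBuilder big = new StringBuilder();
        for (int i = 0; i < 15000; i++) {
            big.append('x');
        }
        // A ~15,000-character value like the one that triggered ORA-01461
        System.out.println(truncateForColumn(big.toString(), 4000).length()); // 4000
    }
}
```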

In my search for information about this error I did not see anything that mentioned this, so I thought I'd post it to hopefully help some other poor sucker who's butting his head against his keyboard in frustration over a similar issue. Hope this helps!


ADF 11g: Using Custom Properties To Create Update-Only View Objects

One of the cool features of the ADF Business Components layer in 11g is the ability to add custom properties to Entity or View objects. It's a neat feature, but up until this point I hadn't really had any need to use them.

Then, when I was trying to implement a View object that allows only updates, not inserts or deletes, I learned that there isn't really a way to do this declaratively in ADF 11g. It seemed like one of those things that should be available, but the help says this:
“Some 4GL tools like Oracle Forms provide declarative properties that control whether a given data collection allows inserts, updates, or deletes. While the view object does not yet support this as a built-in feature in the current release, it’s easy to add this facility using a framework extension class that exploits custom metadata properties as the developer-supplied flags to control insert, update, or delete on a view object.”

The above quote is from section 37.11 of the Fusion Developer's Guide for Oracle ADF. The section is actually titled "Declaratively Preventing Insert, Update, and Delete," which sounded like exactly what I wanted, but when I read the section I found that little bit of discouraging news. The last few words were encouraging; I thought using custom properties to control inserts, updates, or deletes would be perfect. Then I read on.

The next couple of paragraphs seem to suggest creating instances of the view object called ViewObjectInsert, ViewObjectUpdate, and ViewObjectDelete; generic framework code could then look for these view instances and, based on whether a custom property is set, then blah, blah, blah. I think it actually said something about looking up the phase of the moon, too.

I'm not sure why you would want custom instances just to determine whether or not custom properties should be looked up. Why not use the custom properties directly? Anyway, that is the route I decided to take, and it turned out to be pretty simple.

Chris Muir had written an article on using custom properties to automatically convert the case of input, and I used his article along with the help section of the Fusion Developer's Guide to come up with this solution. Thanks Chris!

1. Create a custom ViewObject implementation class.
This is done by creating a class that extends the Oracle View Object.

Create a New Java Class

1a. Right click on the package you would like your custom class to reside in.
1b. Click on Simple File in the left pane and Java Class on the right pane.
1c. Name the file and make sure it extends the ViewObjectImpl class.

2. Create a method to check the custom property.

    private boolean isAllowed(String action) {
        if (getViewDef() != null) {
            // A property explicitly set to "false" disallows the action;
            // a missing property (null) allows it.
            String actionProperty = (String) getViewDef().getProperty(action);
            if ("false".equals(actionProperty)) {
                return false;
            }
        }
        return true;
    }
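The check can be exercised outside ADF as well: substituting a plain Map for the view definition's property store gives the same decision logic. This standalone version is just for illustration; the real class reads getViewDef().getProperty() as shown above.

```java
import java.util.HashMap;
import java.util.Map;

public class AllowedCheck {
    // Mirrors the isAllowed logic: an action is permitted unless a
    // property with that name is explicitly set to "false".
    static boolean isAllowed(Map<String, String> props, String action) {
        return !"false".equals(props.get(action));
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<String, String>();
        props.put("insert", "false");
        props.put("delete", "false");
        System.out.println(isAllowed(props, "insert")); // false
        System.out.println(isAllowed(props, "update")); // true (no property set)
    }
}
```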

3. Override the appropriate methods.
3a. Override the createRow method to check if Create is allowed, and if it isn’t throw an exception.

    public Row createRow() {
        if (isAllowed("insert")) {
            return super.createRow();
        } else {
            throw new JboException("Create not allowed in this view");
        }
    }

3b. Override the removeCurrentRow method in the same way.

    public void removeCurrentRow() {
        if (isAllowed("delete")) {
            super.removeCurrentRow();
        } else {
            throw new JboException("Delete not allowed in this view");
        }
    }

4. Add the necessary declarations to the View Object you wish to have these features.
4a. Add the following line to the attributes of the View Object to have it implement the framework class.

	ComponentClass="com.vgo.demo.framework.MyCustomViewObjectImpl"
Add Custom View Properties


4b. Add the necessary custom properties to the View Object. Click on the General section of the Overview tab of the view object, open the Custom Properties section, and click the green plus. Change the name to "insert" and the value to "false". Click the green plus again to add another custom property; name this one "delete" and set its value to "false".

5. That's it; it really is that simple. Now run an Application Module that contains the View and try to insert or delete. When you do, you should see the exception thrown to inform the user that the action is not permitted.

Error Message for Insert Shown

As you can see, custom properties in ADF 11g are sure to prove extremely useful in the future; I am sure this is but one potential use for them.
