Annotations

You will start by looking at annotations. You have already seen annotations in action when you looked at both inheritance and functional interfaces. For instance, you saw how an annotation could be placed before a method declaration to indicate it was overriding a method from a super-class or interface:

@Override

public void overriddenMethod() {

}

This annotation ensured that if the underlying method definition changed, you would receive a compilation error. This is useful, because it prompts you to also change your method definition.

One way of thinking about annotations is that they are extending the functionality of the compiler. If the underlying method definition was changed, your code would still be valid: it just wouldn’t be doing what you thought it was doing (overriding a method).
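As a minimal sketch of the scenario just described (the Vehicle and Car classes here are hypothetical, not part of the project you are about to build), the annotation below turns a renamed superclass method into a compilation error rather than a silent behaviour change:

class Vehicle {
    public void describe() {
        System.out.println("A vehicle");
    }
}

class Car extends Vehicle {
    // If Vehicle.describe() were later renamed, this @Override would
    // fail to compile instead of silently becoming a brand new method.
    @Override
    public void describe() {
        System.out.println("A car");
    }
}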

The annotations you have looked at so far are all built-in annotations, and are processed at compile time. In this chapter you will look at how you can write your own custom annotations, and access these at run-time.

The requirements for the program outlined at the start of this chapter stated that the names of the columns in the CSV file do not need to match the field names in the class. You will therefore create an annotation that can be used to specify the CSV column name that a setter method on a class relates to.

Start by creating a new project in Eclipse; I have called mine GenericProcessor. Next, right-click on the project and choose New -> Annotation. Name the annotation Column, and place it in a package called processor.annotations.

You will notice that Eclipse generates code that looks like an interface, but contains an @ symbol:

public @interface Column {

}

The annotations you have looked at earlier in the book have been marker (or tagging) annotations: they carried no information beyond their own presence. It is also possible to pass values to an annotation: to allow this you add a method signature to the annotation, and the return type of that method is the type the annotation accepts. You will first add the ability to specify the column name from the CSV file:

public @interface Column {

    String columnName();

}

You will also add another parameter that can be used for specifying a date format if the column contains date information. In this case you will also provide a default value for the parameter:

public @interface Column {

    String columnName();

    String format() default "yyyy-MM-dd";

}
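As a quick preview of how these elements will be used (the full versions of these setters appear in the StockData class later in this chapter, and the "dd/MM/yyyy" override here is purely hypothetical), a caller can rely on the default format or override it:

@Column(columnName = "Open")   // format falls back to its default of "yyyy-MM-dd"
public void setOpeningPrice(Double openingPrice) {
    this.openingPrice = openingPrice;
}

@Column(columnName = "Date", format = "dd/MM/yyyy")   // default overridden
public void setDate(Date date) {
    this.date = date;
}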

Next, you need to specify how the annotation should be used. In this case you will declare that the annotation is retained at runtime (so that it can be accessed as the program executes), and that it can only be applied to methods (rather than classes or fields). Without the RUNTIME retention policy, the annotation would not be visible to reflection when the program runs:

package processor.annotations;

import java.lang.annotation.ElementType;

import java.lang.annotation.Retention;

import java.lang.annotation.RetentionPolicy;

import java.lang.annotation.Target;

 

@Retention(RetentionPolicy.RUNTIME)

@Target({ElementType.METHOD})

public @interface Column {

    String columnName();

    String format() default "yyyy-MM-dd";

}

Now that you have created your annotation, you can define a CSV file that you may want to process. The file you will use contains stock trading information for Oracle Corporation from 2014 back to 1996. This file is available on the book’s website, and is called ORCL.csv. It should be placed in the base directory of your project. The first few lines of the file look like this:

Date,Open,High,Low,Close,Volume,Adj Close

2014-05-23,17.26,18.90,16.96,17.30,168500,17.30

2014-05-22,16.97,17.25,16.85,17.21,135500,17.21

2014-05-21,16.80,17.23,16.69,17.00,180200,17.00

2014-05-20,16.86,17.01,16.61,16.73,161700,16.73

2014-05-19,16.79,17.16,16.67,16.88,129100,16.88

2014-05-16,16.79,17.25,16.50,16.91,148200,16.91

2014-05-15,16.53,16.87,16.53,16.86,235500,16.86

2014-05-14,16.01,16.90,16.01,16.67,254100,16.67

2014-05-13,15.96,16.09,15.79,15.81,78000,15.81

2014-05-12,15.77,16.30,15.70,16.04,94200,16.02

2014-05-09,15.52,15.86,15.26,15.66,98900,15.64

2014-05-08,15.50,16.33,15.25,15.62,161300,15.60

As you can see, the file contains 7 columns. The first column is the date the row relates to, and the remaining columns contain a variety of price and volume information.

You will now write a class containing 7 fields, and map the setters for these fields to the columns in the file using the Column annotation. This ensures that the field names on the class can differ from the column names in the CSV file:

package processor;

import java.util.Date;

import processor.annotations.Column;

 

public class StockData {

    private Date date;

    private Double openingPrice;

    private Double highestPrice;

    private Double lowestPrice;

    private Double closingPrice;

    private Integer volume;

    private Double adjustedClosingPrice;

   

    public StockData() {}

 

    public Date getDate() {

        return date;

    }

    @Column(comumnName = "Date", format = "yyyy-mm-dd")

    public void setDate(Date date) {

        this.date = date;

    }

 

    public Double getOpeningPrice() {

        return openingPrice;

    }

    @Column(comumnName = "Open")

    public void setOpeningPrice(Double openingPrice) {

        this.openingPrice = openingPrice;

    }

 

    public Double getHighestPrice() {

        return highestPrice;

    }

    @Column(comumnName = "High")

    public void setHighestPrice(Double highestPrice) {

        this.highestPrice = highestPrice;

    }

 

    public Double getLowestPrice() {

        return lowestPrice;

    }

 

    @Column(comumnName = "Low")

    public void setLowestPrice(Double lowestPrice) {

        this.lowestPrice = lowestPrice;

    }

    public Double getClosingPrice() {

        return closingPrice;

    }

 

    @Column(comumnName = "Close")

    public void setClosingPrice(Double closingPrice) {

        this.closingPrice = closingPrice;

    }

 

    public Integer getVolume() {

        return volume;

    }

    @Column(comumnName = "Volume")

    public void setVolume(Integer volume) {

        this.volume = volume;

    }

 

    public Double getAdjustedClosingPrice() {

        return adjustedClosingPrice;

    }

    @Column(comumnName = "Adj Close")

    public void setAdjustedClosingPrice(Double adjustedClosingPrice) {

        this.adjustedClosingPrice = adjustedClosingPrice;

    }

 

}

You have now explicitly linked column names in a CSV file with field names in a class. The annotations still do not do anything, but they have annotated your code in such a way that you can now start writing code to take advantage of them.

You will now write a processor that can accept a class name and a file name, and construct a List of objects of the specified type from the file.

Before writing the code, you will put the basic structure in place, and look at how the processor will be invoked. The processor will have the following basic structure:

package processor;

import java.util.ArrayList;

import java.util.List;

 

public class FileProcessor<T> {

    List<T> processFile(Class className, String filename) {

        List<T> result = new ArrayList<>();

       

        return result;

    }   

}

And will be invoked as follows:

package processor;

public class Main {

    public static void main(String[] args) {

        FileProcessor<StockData> fileProcessor =

            new FileProcessor<>();

        fileProcessor.processFile(StockData.class, "ORCL.csv");

    }

}

To get started you need to write a method that looks at the class passed in, scans all the methods on the class, and determines which ones have Column annotations.

In order to achieve this you are going to use a feature called reflection. Reflection allows you to examine a class at runtime and determine which fields and methods it contains, and even to invoke those methods on specific instances of the class.

Reflection is very useful for writing generic algorithms, because it allows code to make decisions at runtime based on the structure of the classes it has been given to work with. For instance, the FileProcessor in this example will not need any compile time knowledge of the StockData class: everything it needs to know about it will be discovered via reflection.
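If reflection is new to you, the following standalone sketch (the ReflectionDemo class is hypothetical, and ArrayList is used purely as an example) shows the basic idea of examining a class at runtime:

import java.lang.reflect.Method;
import java.util.ArrayList;

public class ReflectionDemo {
    public static void main(String[] args) {
        // Examine any class at runtime; ArrayList is used here purely as an example
        Class<?> c = ArrayList.class;
        for (Method method : c.getMethods()) {
            System.out.println(method.getName() + " takes "
                + method.getParameterCount() + " parameter(s)");
        }
    }
}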

The following is the next version of FileProcessor. It constructs a map where the keys are the column names in the file (as discovered from the annotations), and the values are the related setter methods on StockData:

package processor;

import java.lang.annotation.Annotation;

import java.lang.reflect.Method;

import java.util.ArrayList;

import java.util.HashMap;

import java.util.List;

import java.util.Map;

import processor.annotations.Column;

 

public class FileProcessor<T> {

    List<T> processFile(Class className, String filename) {

        List<T> result = new ArrayList<>();

        Map<String, Method> headerMap = new HashMap<>();

        mapFieldNames(className, headerMap);

        return result;

    }

   

    private void mapFieldNames(Class c,

           Map<String, Method> headerMap) {

        for (Method method : c.getMethods()) {

            Column column = method.getAnnotation(Column.class);

            if (column != null) {

                headerMap.put(column.columnName(), method);

            }       

  }

    }

}

In order to access all the methods on a given class, the code simply calls c.getMethods(). Each method is then represented as an instance of the Method class.

Notice also that once a Column annotation is identified on a method using getAnnotation, it is possible to extract its value for columnName as follows:

column.columnName()

You will now write code to process the header line in the file, and determine which position each column name has. This will ensure that it is possible to change the order of the column names after the program is written, and still have the program function correctly. The method looks as follows:

private void mapColumnPositions(

        Map<Integer, String> headerPositionMap, String firstLine) {

    StringTokenizer st = new StringTokenizer(firstLine, ",");

    int i = 0;

    while (st.hasMoreTokens()) {

        headerPositionMap.put(i++, st.nextToken());

    }

}

You will look at how this is called shortly, but it will accept a Map and the header line from the file, and populate the Map with the position of each column.
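For example (a sketch only, assuming the ORCL.csv header shown earlier), calling the method with the header line would populate the map as follows:

Map<Integer, String> headerPositionMap = new HashMap<>();
mapColumnPositions(headerPositionMap,
    "Date,Open,High,Low,Close,Volume,Adj Close");
// headerPositionMap now contains:
// 0 -> "Date", 1 -> "Open", 2 -> "High", 3 -> "Low",
// 4 -> "Close", 5 -> "Volume", 6 -> "Adj Close"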

Now, you will create a method that accepts each line of the file and processes it, returning an instance of the required class at the end.

private T processLine(Class className, String line,

        Map<Integer, String> headerPositionMap,

        Map<String, Method> headerMap) throws Exception {

    T t = (T)className.newInstance();

    int i = 0;

    StringTokenizer st = new StringTokenizer(line, ",");

    while (st.hasMoreTokens()) {

        String columnName = headerPositionMap.get(i);

        Method setter = headerMap.get(columnName);

        Parameter parameter = setter.getParameters()[0];

        if (parameter.getType().equals(Date.class)) {

            Column column = setter.getAnnotation(Column.class);

            DateFormat df = new SimpleDateFormat(column.format());

            setter.invoke(t, df.parse(st.nextToken()));

        } else if (parameter.getType().equals(Double.class)) {

            setter.invoke(t, Double.valueOf(st.nextToken()));

        } else if (parameter.getType().equals(Integer.class)) {

            setter.invoke(t, Integer.valueOf(st.nextToken()));

        } else {

            // assume it accepts a String

            setter.invoke(t, st.nextToken());

        }          

        i++;

    }

    return t;

}

This method looks complex at first glance, but if you work through it line by line it is reasonably straightforward.

You start by constructing an instance of the class that will hold the data in the line (an instance of StockData in this case). You instantiate it using a mechanism you have not seen so far: the newInstance method on the Class object itself.

T t = (T)className.newInstance();

Next, you split the line based on commas, and process one token at a time. You use the index of the token to determine which column name you are processing, and access the appropriate setter for that column on the newly created object:

String columnName = headerPositionMap.get(i);

Method setter = headerMap.get(columnName);

Once you obtain a reference to the setter you need to determine what type of parameter it accepts. You know that a setter always accepts one argument; therefore you can access this via reflection also:

Parameter parameter = setter.getParameters()[0];

The next section of the method contains a set of if-else blocks that invoke the setter based on its type. For instance, if it accepts an Integer you invoke it as follows:

setter.invoke(t, Integer.valueOf(st.nextToken()));

The first block, which handles Date parameters, is the most complex, because in this case you also need to find the format of the date from the annotation using the following code:

Column column = setter.getAnnotation(Column.class);

DateFormat df = new SimpleDateFormat(column.format());

You can now put it all together:

package processor;

 

import java.io.FileNotFoundException;

import java.io.FileReader;

import java.io.IOException;

import java.lang.annotation.Annotation;

import java.lang.reflect.InvocationTargetException;

import java.lang.reflect.Method;

import java.lang.reflect.Parameter;

import java.nio.file.Files;

import java.nio.file.Path;

import java.nio.file.Paths;

import java.text.DateFormat;

import java.text.ParseException;

import java.text.SimpleDateFormat;

import java.util.ArrayList;

import java.util.Date;

import java.util.HashMap;

import java.util.List;

import java.util.Map;

import java.util.StringTokenizer;

import java.util.logging.Level;

import java.util.logging.Logger;

import processor.annotations.Column;

 

public class FileProcessor<T> {

   

    List<T> processFile(Class className, String filename) {

        List<T> result = new ArrayList<>();

        Map<String, Method> headerMap = new HashMap<>();

        Map<Integer, String> headerPositionMap = new HashMap<>();

        mapFieldNames(className, headerMap);

        Path path = Paths.get(filename);

        try {

            Files.lines(path).limit(1).forEach(

               s -> mapColumnPositions(headerPositionMap, s));

            Files.lines(path).skip(1).forEach(s -> {

                try {

                    result.add(processLine(className,

                        s, headerPositionMap, headerMap));

                } catch (Exception e) {

                    Logger.getLogger(FileProcessor.class.getName()).

                         log(Level.SEVERE, "Error occurred", e);

                }

            });

        } catch (IOException ex) {
            Logger.getLogger(FileProcessor.class.getName()).
                 log(Level.SEVERE, "Error reading file", ex);
        }

        return result;

    }

   

    private void mapFieldNames(Class c, Map<String, Method> headerMap) {

        for (Method method : c.getMethods()) {

            Column column = method.getAnnotation(Column.class);

            if (column != null) {

                headerMap.put(column.columnName(), method);

            }

        }

    }

   

    private T processLine(Class className, String line,
            Map<Integer, String> headerPositionMap,
            Map<String, Method> headerMap) throws Exception {

        T t = (T)className.newInstance();

        int i = 0;

        StringTokenizer st = new StringTokenizer(line, ",");

        while (st.hasMoreTokens()) {

            String columnName = headerPositionMap.get(i);

            Method setter = headerMap.get(columnName);

            Parameter parameter = setter.getParameters()[0];

            if (parameter.getType().equals(Date.class)) {

                Column column = setter.getAnnotation(Column.class);

                DateFormat df =

                   new SimpleDateFormat(column.format());

                setter.invoke(t, df.parse(st.nextToken()));

            } else if (parameter.getType().equals(Double.class)) {

                setter.invoke(t, Double.valueOf(st.nextToken()));

            } else if (parameter.getType().equals(Integer.class)) {

                setter.invoke(t, Integer.valueOf(st.nextToken()));

            } else {

                // assume it accepts a String

                setter.invoke(t, st.nextToken());

            }          

            i++;

        }

        return t;

    }

   

    private void mapColumnPositions(
            Map<Integer, String> headerPositionMap, String firstLine) {

        StringTokenizer st = new StringTokenizer(firstLine, ",");

        int i = 0;

        while (st.hasMoreTokens()) {

            headerPositionMap.put(i++, st.nextToken());

        }

    }

   

}

Notice that you are using the limit function on Streams to process the first line from the file:

Files.lines(path).limit(1).forEach(s -> mapColumnPositions(headerPositionMap, s));

You then use the skip function the second time you process the file, to skip over the header:

Files.lines(path).skip(1).forEach(s -> {

    try {

        result.add(processLine(className, s,

             headerPositionMap, headerMap));

    } catch (Exception e) {

       Logger.getLogger(FileProcessor.class.getName()).

         log(Level.SEVERE, "Error occurred", e);

    }

});
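If the limit and skip operations are new to you, this small sketch (independent of any file handling, using hypothetical values) shows their behaviour on a simple stream:

// limit(1) keeps only the first element of the stream
Stream.of("header", "row1", "row2").limit(1)
      .forEach(System.out::println);   // prints: header

// skip(1) discards the first element and keeps the rest
Stream.of("header", "row1", "row2").skip(1)
      .forEach(System.out::println);   // prints: row1 and row2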

Now that you have code to construct StockData instances, you can write a main method that uses the List it returns to find:

  • The highest volume for any date

  • The greatest difference between the opening and closing price on any single day

  • The highest and lowest opening prices in the file

package processor;

 

import java.util.List;

import java.util.OptionalDouble;

import java.util.OptionalInt;

 

public class Main {

    public static void main(String[] args) {

        FileProcessor<StockData> fileProcessor =

            new FileProcessor<>();

        List<StockData> data = fileProcessor.processFile(

              StockData.class, "ORCL.csv");

        // find the maximum volume

        OptionalInt resultVolume = data.stream().

              mapToInt(sd -> sd.getVolume()).max();

        // find the maximum difference between the opening price

        // and the closing price

        OptionalDouble resultPrice = data.stream().mapToDouble(

              sd -> sd.getClosingPrice()-sd.getOpeningPrice()).max();

        // find the minimum opening price

        OptionalDouble lowestPrice = data.stream().mapToDouble(

              sd -> sd.getOpeningPrice()).min();

        // find the maximum opening price

        OptionalDouble highestPrice = data.stream().mapToDouble(

              sd -> sd.getOpeningPrice()).max();

        System.out.println("Highest volume: "+

            resultVolume.getAsInt());

        System.out.println("Highest price change : "+

             resultPrice.getAsDouble());      

        System.out.println("Lowest price : "+

              lowestPrice.getAsDouble());

        System.out.println("Highest price : "+

              highestPrice.getAsDouble());

    }

}

Before ending this chapter, take a step back and look at the FileProcessor you have written. It knows nothing about the file it will be passed, or the class it will be asked to instantiate for each line of the file, yet it manages to perform the task through the combined power of annotations and reflection.

Code such as this is very valuable, because it can work with files and classes that have not even been written yet.
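For instance, a hypothetical WeatherData class like the one sketched below (not part of this project, and assuming a weather.csv file with Date, Min Temp and Max Temp columns) could be loaded by the very same FileProcessor without changing a line of its code:

package processor;

import java.util.Date;

import processor.annotations.Column;

public class WeatherData {

    private Date date;
    private Double minimumTemperature;
    private Double maximumTemperature;

    @Column(columnName = "Date", format = "dd/MM/yyyy")
    public void setDate(Date date) {
        this.date = date;
    }

    @Column(columnName = "Min Temp")
    public void setMinimumTemperature(Double minimumTemperature) {
        this.minimumTemperature = minimumTemperature;
    }

    @Column(columnName = "Max Temp")
    public void setMaximumTemperature(Double maximumTemperature) {
        this.maximumTemperature = maximumTemperature;
    }
}

It would then be processed in exactly the same way as StockData:

List<WeatherData> readings = new FileProcessor<WeatherData>()
    .processFile(WeatherData.class, "weather.csv");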

 