When SyntaxNet assigns part-of-speech tags, that is a natural language processing task: you're using a model (here, one built from neural networks) to DO something. Common tasks include part-of-speech tagging, syntactic parsing, word sense disambiguation, etc. Part-of-speech tagging is rarely the end goal in itself; it is merely a preliminary step toward something else. As the SyntaxNet documentation puts it, SyntaxNet provides the basis for Natural Language Understanding systems: you typically use part-of-speech tags for some other purpose further down the language processing pipeline.
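To make the "pipeline" idea concrete, here's a minimal sketch in plain Python. It assumes a tagger (SyntaxNet or otherwise) has already produced (word, tag) pairs; the tags and the noun-extraction step are just illustrative, not SyntaxNet's actual API:

```python
# Hypothetical downstream step: a tagger has already produced (word, tag)
# pairs for a sentence. A later component might keep only the nouns,
# e.g. as candidate keywords for some other task.
tagged = [("SyntaxNet", "NNP"), ("tags", "VBZ"), ("each", "DT"),
          ("word", "NN"), ("in", "IN"), ("a", "DT"), ("sentence", "NN")]

# Penn Treebank noun tags all start with "NN" (NN, NNS, NNP, NNPS).
nouns = [word for word, tag in tagged if tag.startswith("NN")]
print(nouns)  # ['SyntaxNet', 'word', 'sentence']
```

The point is simply that the tags themselves aren't the product; they're input to whatever comes next.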
As for features: when you build a model, you need features, which are some representation of the natural language text. For example, bag-of-words approaches commonly use sequences of words (n-grams) as features, then either count how often each sequence occurs or simply mark whether it occurs at all in a given piece of text. You can also extract features in much more complex ways. For example, you could use the part-of-speech tags output by SyntaxNet as features for a model that does word sense disambiguation.
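The bag-of-n-grams idea above can be sketched in a few lines of Python. The function name and the tokenization (lowercasing and whitespace splitting) are my own simplifications; real pipelines usually do proper tokenization:

```python
from collections import Counter

def ngram_counts(text, n=2):
    """Count word n-grams in a text: a simple bag-of-n-grams feature vector."""
    words = text.lower().split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    return Counter(grams)

features = ngram_counts("the cat sat on the mat", n=2)
print(features[("the", "cat")])   # 1
print(features[("the", "mat")])   # 1
# For a binary (presence/absence) variant, you'd just check membership:
print(("cat", "sat") in features)  # True
```

With n=1 this reduces to plain word counts, i.e. the classic bag-of-words representation.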
If anything doesn't make sense, just let me know. I'll try to write up some examples.