FIWARE - "null" name is given to both folder and file when sinking data from Orion to Cosmos using Cygnus


I have an issue related to the ngsi2cosmos data flow. Everything works fine when persisting the information received in Orion into the public instance of Cosmos, except that the destination folder and file name are both "null".

A simple test follows:

  • I create a brand new NGSIEntity with these headers added: Fiware-Service: myservice and Fiware-ServicePath: /my
  • I add a new subscription with Cygnus as the reference endpoint.
  • I send an update to the created NGSIEntity.
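The steps above can be sketched against the NGSI v1 API that Orion 0.19 exposes. This is only an illustration: the Cygnus host name, entity id/type, and attribute below are assumptions, not values from the question; the headers are the ones described above.

```python
import json

# Headers carrying the service and service path, as in the test above.
# Cygnus uses fiware-service as the HDFS folder under /user/<user>/.
headers = {
    "Content-Type": "application/json",
    "Fiware-Service": "myservice",
    "Fiware-ServicePath": "/my",
}

# NGSI v1 subscription pointing at Cygnus' notification endpoint
# (hypothetical entity id/type and Cygnus host).
subscription = {
    "entities": [{"type": "Room", "isPattern": "false", "id": "Room.1"}],
    "attributes": ["temperature"],
    "reference": "http://cygnus-host:5050/notify",
    "duration": "P1M",
    "notifyConditions": [{"type": "ONCHANGE", "condValues": ["temperature"]}],
}

payload = json.dumps(subscription)
```

The payload would then be POSTed to /v1/subscribeContext on the Orion instance with those headers, for example via curl.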

When I check my user space in Cosmos, I see that the following route has been created: /user/myuser/myservice/null/null.txt

The file content is OK; every piece of updated info in Orion has been correctly sinked into it. The problem is the folder and file names; I can't make them work properly. Aren't entityId and entityType supposed to be used for the folder and file naming?
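For context, the path Cygnus builds has roughly the shape /user/&lt;cosmos_default_username&gt;/&lt;fiware-service&gt;/&lt;destination&gt;/&lt;destination&gt;.txt, where the destination is taken from a Flume event header; if that header never gets set, Java renders the unset reference as the literal string "null". A minimal sketch of that composition (the function and its behaviour are an illustration of the observed output, not Cygnus' actual code):

```python
def hdfs_path(user, service, destination):
    # Mimic Java's behaviour of printing an unset reference as "null".
    dest = destination if destination is not None else "null"
    return "/user/{0}/{1}/{2}/{2}.txt".format(user, service, dest)

# With no destination resolved, the path from the question appears:
print(hdfs_path("myuser", "myservice", None))
# -> /user/myuser/myservice/null/null.txt
```

With a properly resolved destination (e.g. one derived from entityId and entityType), the folder and file would carry that name instead of "null".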

Component versions:

  • Orion version: contextBroker-0.19.0-1.x86_64
  • Cygnus version: cygnus-0.5-91.g3eb100e.x86_64
  • Cosmos: global instance

Cygnus conf file:

cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink
cygnusagent.channels = hdfs-channel

#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = hdfs-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# default organization (organization semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_organization = org42
# number of channel re-injection retries before a Flume event is discarded
cygnusagent.sources.http-source.handler.events_ttl = 10
# management interface port (FIXME: temporal location for this parameter)
cygnusagent.sources.http-source.handler.management_port = 8081
# source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts
# timestamp interceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder
# matching table for the destination extractor interceptor, do not change
cygnusagent.sources.http-source.interceptors.de.matching_table = matching_table.conf

#=============================================
# OrionHDFSSink configuration
# channel name from where to read the notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
# sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink
# comma-separated list of FQDN/IP addresses regarding the Cosmos Namenode endpoints
cygnusagent.sinks.hdfs-sink.cosmos_host = 130.206.80.46
# port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for infinity
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000
# default username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser
# default password for the default username
cygnusagent.sinks.hdfs-sink.cosmos_default_password = mypassword
# HDFS backend type (webhdfs, httpfs or infinity)
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs
# how the attributes are stored, either per row or per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = column
# prefix for the database and table names, empty if no prefix is desired
cygnusagent.sinks.hdfs-sink.naming_prefix =
# FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = 130.206.80.46
# Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000

#=============================================
# hdfs-channel configuration
# channel type (must not be changed)
cygnusagent.channels.hdfs-channel.type = memory
# capacity of the channel
cygnusagent.channels.hdfs-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.hdfs-channel.transactioncapacity = 100

I think you should configure the matching_table of Cygnus to define the path and file name.

You have that file in the same path as the Cygnus agent conf file.

You can follow the next example:

# integer id|comma-separated fields|regex to be applied to the fields concatenation|destination|dataset
#
# the available "dictionary" of fields is:
#  - entityId
#  - entityType
#  - servicePath

1|entityId,entityType|Room\.(\d*)Room|numeric_rooms|rooms
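The way such a rule resolves a destination can be illustrated like this: the listed fields are concatenated and the regex is tested against the result; on a match, the rule's destination and dataset are used for naming. This is a sketch of the matching semantics for rule 1 above, not Cygnus' actual implementation:

```python
import re

def resolve_destination(entity_id, entity_type):
    # Rule 1 of the example table: fields entityId,entityType are
    # concatenated and tested against the regex Room\.(\d*)Room.
    concatenated = entity_id + entity_type            # e.g. "Room.1" + "Room"
    if re.match(r"Room\.(\d*)Room", concatenated):
        return ("numeric_rooms", "rooms")             # destination, dataset
    return (None, None)                               # no rule matched

print(resolve_destination("Room.1", "Room"))
# -> ('numeric_rooms', 'rooms')
```

An entity whose id/type concatenation does not match any rule gets no destination from the table and falls back to Cygnus' default naming.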
