Drill Cluster Installation


1. Environment

1.1 JDK

/opt/jdk1.7.0_79

1.2 Drill version

/opt/apache-drill-1.7.0

1.3 ZooKeeper address

SZB-L0032013:2181,SZB-L0032014:2181,SZB-L0032015:2181

1.4 Drill cluster servers

10.20.25.199   SZB-L0032015 

10.20.25.241   SZB-L0032016 

10.20.25.137   SZB-L0032017 


2. Drill Cluster

2.1 Changes to drill-override.conf

drill.exec: {
  cluster-id: "szb_drill_cluster",
  zk.connect: "SZB-L0032013:2181,SZB-L0032014:2181,SZB-L0032015:2181"
}

2.2 Additions to drill-env.sh

export HADOOP_HOME="/opt/cloudera/parcels/CDH"

2.3 Starting the cluster

Run the following command on every node: ./drillbit.sh start
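With only three nodes this is easy to script from one machine. A minimal dry-run sketch (assuming passwordless SSH and the same install path on every node): it only prints the per-node command; replace `echo` with `ssh "$node"` to actually execute.

```shell
# Dry-run sketch: print the start command each drillbit node would run.
# Swap `echo` for `ssh "$node"` to execute (assumes passwordless SSH).
DRILL_HOME=/opt/apache-drill-1.7.0
for node in SZB-L0032015 SZB-L0032016 SZB-L0032017; do
  echo "$node: $DRILL_HOME/bin/drillbit.sh start"
done
```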

1. Verify Drill:

[root@SZB-L0032017 bin]# ./drillbit.sh status
drillbit is running.

2. Verify the ZooKeeper configuration:

 ./zookeeper-client -server SZB-L0032013:2181 

 [zk: SZB-L0032013:2181(CONNECTED) 0] ls / 

 [drill, hive_zookeeper_namespace_hive, hbase, zookeeper]

 [zk: SZB-L0032013:2181(CONNECTED) 1] quit 

3. Verify via the Web UI: all three Drill cluster nodes are configured and running normally.

(Screenshot: Drill Web UI showing the three running drillbits)

2.4 Query tests

1. Query a local file:

 select * from dfs.`/opt/apache-drill-1.7.0/sample-data/nation.parquet` LIMIT 20 
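Queries like the one above can be run from sqlline. A sketch of building the JDBC URL from the cluster settings in sections 1.3 and 2.1; the `/drill/<cluster-id>` suffix assumes Drill's default znode root:

```shell
# Sketch: build the sqlline JDBC URL from the ZooKeeper quorum and cluster-id above.
ZK="SZB-L0032013:2181,SZB-L0032014:2181,SZB-L0032015:2181"
URL="jdbc:drill:zk=$ZK/drill/szb_drill_cluster"
echo "$URL"
# interactive session: /opt/apache-drill-1.7.0/bin/sqlline -u "$URL"
```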

2. Create the hdfs plugin and test it:

 $ hdfs dfs -mkdir /user/drill 

 $ hdfs dfs -chown -R root:root /user/drill

 $ hdfs dfs -chmod -R 777 /user/drill

 $ hdfs dfs -copyFromLocal /opt/apache-drill-1.7.0/sample-data/nation.parquet /user/drill 

select * from hdfs.`/user/drill/nation.parquet` LIMIT 20
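The hdfs plugin used by this query is created on the Storage tab of the Web UI. A minimal sketch of its definition; the NameNode address is borrowed from the pstore setting in section 5 and is an assumption here — substitute your own:

```json
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://SZB-L0032014:8020",
  "workspaces": {
    "root": { "location": "/", "writable": false, "defaultInputFormat": null }
  },
  "formats": {
    "parquet": { "type": "parquet" }
  }
}
```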

3. Create the hive plugin and test it:

SELECT * FROM hive.`student`
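A minimal sketch of the hive plugin definition (also created on the Storage tab); the metastore URI below is a hypothetical placeholder — point it at your actual Hive metastore:

```json
{
  "type": "hive",
  "enabled": true,
  "configProps": {
    "hive.metastore.uris": "thrift://SZB-L0032013:9083",
    "hive.metastore.sasl.enabled": "false"
  }
}
```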

4. Create the hbase plugin and test it:

SELECT CONVERT_FROM(row_key, 'UTF8') AS id,
       CONVERT_FROM(student.info.name, 'UTF8') AS name,
       CAST(student.info.age AS INT) AS age
FROM hbase.`student`

Note: HBase stores data as UTF-8-encoded binary, so the query has to convert the encoding and cast the types.
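A minimal sketch of the hbase plugin definition from step 4. The quorum below reuses the ZooKeeper hosts from section 1.3, which is an assumption — HBase may run its own quorum:

```json
{
  "type": "hbase",
  "enabled": true,
  "config": {
    "hbase.zookeeper.quorum": "SZB-L0032013,SZB-L0032014,SZB-L0032015",
    "hbase.zookeeper.property.clientPort": "2181"
  }
}
```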

3. Tableau Access

3.1 Install the ODBC driver provided by MapR

3.2 Configuration

cluster-id: szb_drill_cluster
zk.connect: SZB-L0032013:2181,SZB-L0032014:2181,SZB-L0032015:2181


3.3 In Tableau, simply choose the Other ODBC connection option

4. HDFS High-Availability Plugin Configuration

4.1 Plugin definition

{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://mycluster",
  "config": null,
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "psv": {
      "type": "text",
      "extensions": ["tbl"],
      "delimiter": "|"
    },
    "csv": {
      "type": "text",
      "extensions": ["csv"],
      "delimiter": ","
    },
    "tsv": {
      "type": "text",
      "extensions": ["tsv"],
      "delimiter": "\t"
    },
    "parquet": {
      "type": "parquet"
    },
    "json": {
      "type": "json",
      "extensions": ["json"]
    },
    "avro": {
      "type": "avro"
    },
    "sequencefile": {
      "type": "sequencefile",
      "extensions": ["seq"]
    },
    "csvh": {
      "type": "text",
      "extensions": ["csvh"],
      "extractHeader": true,
      "delimiter": ","
    }
  }
}

4.2 HDFS configuration files

[root@HDP03 log]# cp hdfs-site.xml /usr/local/apache-drill-1.7.0/conf/

[root@HDP03 log]# cp core-site.xml /usr/local/apache-drill-1.7.0/conf/

[root@HDP03 log]# cp mapred-site.xml /usr/local/apache-drill-1.7.0/conf/

5. Log Aggregation: changes to drill-override.conf

Setting sys.store.provider.zk.blobroot points Drill's persistent store (e.g. query profiles) at a shared HDFS path, so every node writes to one place.

drill.exec: {
  cluster-id: "szb_drill_cluster",
  zk.connect: "SZB-L0032013:2181,SZB-L0032014:2181,SZB-L0032015:2181",
  sys.store.provider.zk.blobroot: "hdfs://SZB-L0032014:8020/user/drill/pstore"
}
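The pstore path should exist in HDFS before the drillbits are restarted. A dry-run sketch (drop the `echo` to execute; assumes an HDFS client on the PATH and write access to /user/drill):

```shell
# Dry-run sketch: pre-create the pstore path from the config above.
PSTORE=/user/drill/pstore
echo "hdfs dfs -mkdir -p $PSTORE"   # drop `echo` to actually run it
```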
Written by 平常心 and licensed under the Creative Commons Attribution-ShareAlike 3.0 China Mainland license. Contact the author and credit the source before reproducing or quoting.