Sqoop Installation and Usage
Published: 2019-06-08

Introduction:

  Sqoop is a tool for importing and exporting data between Hadoop and relational databases. With Sqoop you can import data from a database (such as MySQL or Oracle) into HDFS, and you can also export data from HDFS back into a relational database. Sqoop translates its commands into Hadoop MapReduce jobs (usually map-only): the generated job runs multiple MapTasks in parallel to transfer the data, which greatly improves transfer speed and efficiency, whereas implementing the same kind of multi-threaded transfer with shell scripts would be quite difficult. Sqoop2 (sqoop 1.99.7) requires configuring a proxy in the Hadoop configuration files and is a heavyweight, embedded installation, so in this article we use Sqoop1 (Sqoop 1.4.6).

 

Prerequisites (see my earlier posts if you don't know how to install these):

Installed on CloudDeskTop: hadoop-2.7.3, jdk1.7.0_79, mysql-5.5.32, sqoop-1.4.6, hive-1.2.2
Installed on master01 and master02: hadoop-2.7.3, jdk1.7.0_79
Installed on slave01, slave02, slave03: hadoop-2.7.3, jdk1.7.0_79, zookeeper-3.4.10

I. Installation:

1. Upload the installation package to the /install/ directory

2. Extract:

[hadoop@CloudDeskTop install]$ tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C /software/

3. Configure the environment:

[hadoop@CloudDeskTop software]$ su -lc "vi /etc/profile"

JAVA_HOME=/software/jdk1.7.0_79
HADOOP_HOME=/software/hadoop-2.7.3
SQOOP_HOME=/software/sqoop-1.4.6
PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/lib:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SQOOP_HOME/bin
export PATH JAVA_HOME HADOOP_HOME SQOOP_HOME

4. After configuring the environment, run the following so the profile takes effect immediately:

[hadoop@CloudDeskTop software]$ source /etc/profile
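To confirm the variables took effect, you can check that the shell now resolves the sqoop command (the expected paths below assume the layout configured above):

[hadoop@CloudDeskTop software]$ echo $SQOOP_HOME
/software/sqoop-1.4.6
[hadoop@CloudDeskTop software]$ which sqoop
/software/sqoop-1.4.6/bin/sqoop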

5. Go into the /software/sqoop-1.4.6/lib/ directory and upload the MySQL JDBC driver jar

The database driver jar placed here must be this version (5.1.43), because Sqoop needs to talk to the MySQL database; choosing a different driver version easily leads to errors.
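For example, if the driver has been uploaded to /install/, copying it into Sqoop's lib directory might look like this (the jar file name shown is the usual name for that version and is an assumption; adjust it to whatever you actually downloaded):

[hadoop@CloudDeskTop install]$ cp /install/mysql-connector-java-5.1.43-bin.jar /software/sqoop-1.4.6/lib/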

6. Configure Sqoop

[hadoop@CloudDeskTop software]$ cd /software/sqoop-1.4.6/bin/

[hadoop@CloudDeskTop bin]$ vi configure-sqoop

Comment out the code shown below, using ":<<COMMENT" as the opening marker and "COMMENT" as the closing marker;

(The collapsed code block is not reproduced here; it starts around line 127 of configure-sqoop with the :<<COMMENT marker.)
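As a rough sketch of what that wrapped section typically looks like in Sqoop 1.4.6's configure-sqoop (your line numbers and exact wording may differ), the block being silenced is the group of HBase/HCatalog/Accumulo/ZooKeeper checks:

:<<COMMENT
if [ ! -d "${HBASE_HOME}" ]; then
  echo "Warning: $HBASE_HOME does not exist! HBase imports will fail."
  echo 'Please set $HBASE_HOME to the root of your HBase installation.'
fi
if [ ! -d "${HCAT_HOME}" ]; then
  echo "Warning: $HCAT_HOME does not exist! HCatalog jobs will fail."
  echo 'Please set $HCAT_HOME to the root of your HCatalog installation.'
fi
if [ ! -d "${ACCUMULO_HOME}" ]; then
  echo "Warning: $ACCUMULO_HOME does not exist! Accumulo imports will fail."
  echo 'Please set $ACCUMULO_HOME to the root of your Accumulo installation.'
fi
if [ ! -d "${ZOOKEEPER_HOME}" ]; then
  echo "Warning: $ZOOKEEPER_HOME does not exist! Accumulo imports will fail."
  echo 'Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.'
fi
COMMENT

Wrapping these checks this way simply stops Sqoop from printing warnings about components we have not installed.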

 

II. Startup (unless noted otherwise, everything is run as the hadoop user)

[0. Start MySQL on CloudDeskTop as the root user]

[root@CloudDeskTop ~]# cd /software/mysql-5.5.32/sbin/ && ./mysqld start && lsof -i:3306 && cd -

[1. Start the ZooKeeper cluster on the slave nodes (a leader and followers are elected among them)]

  cd /software/zookeeper-3.4.10/bin/ && ./zkServer.sh start && cd - && jps

  cd /software/zookeeper-3.4.10/bin/ && ./zkServer.sh status && cd -

[2. Start the HDFS cluster on master01] cd /software/ && start-dfs.sh && jps

[3. Start the YARN cluster on master01] cd /software/ && start-yarn.sh && jps

[When the YARN cluster starts, it does not bring up the ResourceManager on the standby master node, so run the following on master02:]

cd /software/ && yarn-daemon.sh start resourcemanager && jps

[4. Check the processes]

 

[6. Check the Sqoop version to confirm that Sqoop was installed successfully]

 [hadoop@CloudDeskTop software]$ sqoop version
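If the installation is good, the output should include a line reporting the version, roughly like this (the build/compile details that follow depend on the tarball):

Sqoop 1.4.6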

 

 

III. Testing

  Note: the direction of "import" and "export" is defined with the HDFS cluster as the reference point. Data flowing out of the HDFS cluster is an export; data flowing into the HDFS cluster is an import. Since the data of a Hive table is actually stored in the HDFS cluster, importing into or exporting from a Hive table is really just operating on files in the HDFS cluster.

First, create the data locally:

After creating the table in the Hive database, upload the data file to the path on the cluster where the table's data is stored:

[hadoop@CloudDeskTop test]$ hdfs dfs -put testsqoop.out /user/hive/warehouse/mmzs.db/testsqoop
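For reference, testsqoop.out is a plain tab-separated text file; based on the query results shown below, its content looks like this:

1	ligang	2
2	chenghua	3
3	liqin	1
4	zhanghua	4
5	wanghua	1
6	liulinjin	5
7	wangxiaochuan	6
8	guchuan	2
9	xiaoyong	4
10	huping	6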

Goal 1: Export the data in the HDFS cluster to the MySQL database

1. Create the table in the Hive database mmzs and load the data

[hadoop@CloudDeskTop software]$ cd /software/hive-1.2.2/bin/
[hadoop@CloudDeskTop bin]$ ./hive
hive> show databases;
OK
default
mmzs
mmzsmysql
Time taken: 0.373 seconds, Fetched: 3 row(s)
hive> create table if not exists mmzs.testsqoop(id int,name string,age int) row format delimited fields terminated by '\t';
OK
Time taken: 0.126 seconds
hive> select * from mmzs.testsqoop;
OK
1	ligang	2
2	chenghua	3
3	liqin	1
4	zhanghua	4
5	wanghua	1
6	liulinjin	5
7	wangxiaochuan	6
8	guchuan	2
9	xiaoyong	4
10	huping	6
Time taken: 0.824 seconds, Fetched: 10 row(s)

2. Create a table with the same fields in the MySQL database

[root@CloudDeskTop bin]# cd ~
[root@CloudDeskTop ~]# cd /software/mysql-5.5.32/bin/
[root@CloudDeskTop bin]# ./mysql -uroot -p123456 -P3306 -h192.168.154.134 -e "create database mmzs character set utf8"
[root@CloudDeskTop bin]# ./mysql -uroot -p123456 -h192.168.154.134 -P3306 -Dmmzs
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 5.5.32 Source distribution
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show tables;
Empty set (0.00 sec)
mysql> create table if not exists testsqoop(uid int(11),uname varchar(30),age int)engine=innodb charset=utf8;
Query OK, 0 rows affected (0.06 sec)
mysql> desc testsqoop;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| uid   | int(11)     | YES  |     | NULL    |       |
| uname | varchar(30) | YES  |     | NULL    |       |
| age   | int(11)     | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
3 rows in set (0.00 sec)
mysql> select * from testsqoop;
Empty set (0.01 sec)

3. Use Sqoop to export the data in the Hive table to the MySQL database (exporting the entire HDFS file)

[hadoop@CloudDeskTop software]$ sqoop-export --help

17/12/30 21:54:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
usage: sqoop export [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect                       Specify JDBC connect string
   --connection-manager            Specify connection manager class name
   --connection-param-file         Specify connection parameters file
   --driver                        Manually specify JDBC driver class to use
   --hadoop-home                   Override $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home            Override $HADOOP_MAPRED_HOME_ARG
   --help                          Print usage instructions
   -P                              Read password from console
   --password                      Set authentication password
   --password-alias                Credential provider password alias
   --password-file                 Set authentication password file path
   --relaxed-isolation             Use read-uncommitted isolation for imports
   --skip-dist-cache               Skip copying jars to distributed cache
   --username                      Set authentication username
   --verbose                       Print more information while working

Export control arguments:
   --batch                         Indicates underlying statements to be executed in batch mode
   --call                          Populate the table using this stored procedure (one call per row)
   --clear-staging-table           Indicates that any data in staging table can be deleted
   --columns                       Columns to export to table
   --direct                        Use direct export fast path
   --export-dir                    HDFS source path for the export
   -m,--num-mappers                Use 'n' map tasks to export in parallel
   --mapreduce-job-name            Set name for generated mapreduce job
   --staging-table                 Intermediate staging table
   --table                         Table to populate
   --update-key                    Update records by specified key column
   --update-mode                   Specifies how updates are performed when new rows are found with non-matching keys in database
   --validate                      Validate the copy using the configured validator
   --validation-failurehandler     Fully qualified class name for ValidationFailureHandler
   --validation-threshold          Fully qualified class name for ValidationThreshold
   --validator                     Fully qualified class name for the Validator

Input parsing arguments:
   --input-enclosed-by             Sets a required field encloser
   --input-escaped-by              Sets the input escape character
   --input-fields-terminated-by    Sets the input field separator
   --input-lines-terminated-by     Sets the input end-of-line char
   --input-optionally-enclosed-by  Sets a field enclosing character

Output line formatting arguments:
   --enclosed-by                   Sets a required field enclosing character
   --escaped-by                    Sets the escape character
   --fields-terminated-by          Sets the field separator character
   --lines-terminated-by           Sets the end-of-line character
   --mysql-delimiters              Uses MySQL's default delimiter set: fields: , lines: \n escaped-by: \ optionally-enclosed-by: '
   --optionally-enclosed-by        Sets a field enclosing character

Code generation arguments:
   --bindir                        Output directory for compiled objects
   --class-name                    Sets the generated class name. This overrides --package-name. When combined with --jar-file, sets the input class.
   --input-null-non-string         Input null non-string representation
   --input-null-string             Input null string representation
   --jar-file                      Disable code generation; use specified jar
   --map-column-java               Override mapping for specific columns to java types
   --null-non-string               Null non-string representation
   --null-string                   Null string representation
   --outdir                        Output directory for generated code
   --package-name                  Put auto-generated classes in this package

HCatalog arguments:
   --hcatalog-database             HCatalog database name
   --hcatalog-home                 Override $HCAT_HOME
   --hcatalog-partition-keys       Sets the partition keys to use when importing to hive
   --hcatalog-partition-values     Sets the partition values to use when importing to hive
   --hcatalog-table                HCatalog table name
   --hive-home                     Override $HIVE_HOME
   --hive-partition-key            Sets the partition key to use when importing to hive
   --hive-partition-value          Sets the partition value to use when importing to hive
   --map-column-hive               Override mapping for specific column to hive types.

Generic Hadoop command-line arguments:
(must preceed any tool-specific arguments)
Generic options supported are
   -conf                           specify an application configuration file
   -D                              use value for given property
   -fs                             specify a namenode
   -jt                             specify a ResourceManager
   -files                          specify comma separated files to be copied to the map reduce cluster
   -libjars                        specify comma separated jar files to include in the classpath.
   -archives                       specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

At minimum, you must specify --connect, --export-dir, and --table

# -m specifies the number of map tasks

[hadoop@CloudDeskTop software]$ sqoop-export --export-dir '/user/hive/warehouse/mmzs.db/testsqoop' --fields-terminated-by '\t' --lines-terminated-by '\n' --connect 'jdbc:mysql://192.168.154.134:3306/mmzs' --username 'root' --password '123456' --table 'testsqoop' -m 2
Run log (abridged; SLF4J/deprecation notices and most counters omitted):

17/12/30 22:02:04 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/12/30 22:02:04 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/12/30 22:02:04 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/12/30 22:02:05 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `testsqoop` AS t LIMIT 1
17/12/30 22:02:11 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/e2b7e669ef4d8d43016e44ce1cddb620/testsqoop.jar
17/12/30 22:02:11 INFO mapreduce.ExportJobBase: Beginning export of testsqoop
17/12/30 22:02:22 INFO input.FileInputFormat: Total input paths to process : 1
17/12/30 22:02:23 INFO mapreduce.JobSubmitter: number of splits:2
17/12/30 22:02:25 INFO impl.YarnClientImpl: Submitted application application_1514638990227_0001
17/12/30 22:02:25 INFO mapreduce.Job: The url to track the job: http://master01:8088/proxy/application_1514638990227_0001/
17/12/30 22:02:25 INFO mapreduce.Job: Running job: job_1514638990227_0001
17/12/30 22:03:13 INFO mapreduce.Job:  map 0% reduce 0%
17/12/30 22:03:58 INFO mapreduce.Job:  map 100% reduce 0%
17/12/30 22:03:59 INFO mapreduce.Job: Job job_1514638990227_0001 completed successfully
17/12/30 22:03:59 INFO mapreduce.Job: Counters: 30
    Job Counters
        Launched map tasks=2
        Data-local map tasks=2
    Map-Reduce Framework
        Map input records=10
        Map output records=10
17/12/30 22:03:59 INFO mapreduce.ExportJobBase: Transferred 484 bytes in 105.965 seconds (4.5675 bytes/sec)
17/12/30 22:03:59 INFO mapreduce.ExportJobBase: Exported 10 records.

Summary: the run log shows that the job has only Map tasks, no Reduce tasks.

4. Query the result again in the MySQL database

mysql> select * from testsqoop;
+------+---------------+------+
| uid  | uname         | age  |
+------+---------------+------+
|    1 | ligang        |    2 |
|    2 | chenghua      |    3 |
|    3 | liqin         |    1 |
|    4 | zhanghua      |    4 |
|    5 | wanghua       |    1 |
|    6 | liulinjin     |    5 |
|    7 | wangxiaochuan |    6 |
|    8 | guchuan       |    2 |
|    9 | xiaoyong      |    4 |
|   10 | huping        |    6 |
+------+---------------+------+
10 rows in set (0.00 sec)

The result confirms that the data was exported to the MySQL database successfully.
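Note that this export simply appends rows, so running it a second time would insert the same ten records again. As a rough sketch (not part of the original walkthrough), if the MySQL table were given a primary or unique key on uid, the export could be made idempotent with the --update-key and --update-mode options listed in the help above:

[hadoop@CloudDeskTop software]$ sqoop-export --export-dir '/user/hive/warehouse/mmzs.db/testsqoop' --fields-terminated-by '\t' --lines-terminated-by '\n' --connect 'jdbc:mysql://192.168.154.134:3306/mmzs' --username 'root' --password '123456' --table 'testsqoop' --update-key uid --update-mode allowinsert -m 2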

 

Goal 2: Import the data from MySQL into the HDFS cluster

1. Delete the data of the testsqoop table in the Hive database mmzs
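Since the Hive table's data is just the files under its warehouse directory on HDFS, one way to clear it (a sketch matching the path used above) is either of the following:

[hadoop@CloudDeskTop software]$ hdfs dfs -rm -r /user/hive/warehouse/mmzs.db/testsqoop/*
hive> truncate table mmzs.testsqoop;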

 

Confirm that the data was really deleted:

2. Import the data from MySQL into the HDFS cluster

A. Import only the rows matching a query into the cluster

[hadoop@CloudDeskTop software]$ sqoop-import --append --connect 'jdbc:mysql://192.168.154.134:3306/mmzs' --username 'root' --password '123456' --query 'select * from mmzs.testsqoop where uid>3 and $CONDITIONS' -m 1 --target-dir '/user/hive/warehouse/mmzs.db/testsqoop' --fields-terminated-by '\t' --lines-terminated-by '\n'
Run log (abridged; SLF4J/deprecation notices and most counters omitted):

17/12/30 22:40:54 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/12/30 22:40:54 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/12/30 22:40:55 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/12/30 22:40:55 INFO manager.SqlManager: Executing SQL statement: select * from mmzs.testsqoop where uid>3 and  (1 = 0)
17/12/30 22:40:58 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/cd00e059648175875074eed7f4189e0b/QueryResult.jar
17/12/30 22:40:58 INFO mapreduce.ImportJobBase: Beginning query import.
17/12/30 22:41:08 INFO db.DBInputFormat: Using read commited transaction isolation
17/12/30 22:41:09 INFO mapreduce.JobSubmitter: number of splits:1
17/12/30 22:41:10 INFO impl.YarnClientImpl: Submitted application application_1514638990227_0003
17/12/30 22:41:10 INFO mapreduce.Job: Running job: job_1514638990227_0003
17/12/30 22:41:54 INFO mapreduce.Job:  map 0% reduce 0%
17/12/30 22:42:29 INFO mapreduce.Job:  map 100% reduce 0%
17/12/30 22:42:31 INFO mapreduce.Job: Job job_1514638990227_0003 completed successfully
17/12/30 22:42:32 INFO mapreduce.Job: Counters: 30
    Job Counters
        Launched map tasks=1
    Map-Reduce Framework
        Map input records=7
        Map output records=7
17/12/30 22:42:32 INFO mapreduce.ImportJobBase: Transferred 94 bytes in 91.0632 seconds (1.0322 bytes/sec)
17/12/30 22:42:32 INFO mapreduce.ImportJobBase: Retrieved 7 records.
17/12/30 22:42:32 INFO util.AppendUtils: Appending to directory testsqoop
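A note on the --query form: Sqoop substitutes $CONDITIONS with the WHERE predicate it generates for each split; the "(1 = 0)" seen in the log is the probe query Sqoop runs just to fetch the column metadata. With -m greater than 1, a free-form --query also requires a --split-by column so Sqoop knows how to divide the rows among the map tasks. A hedged sketch of that variant (not run in this walkthrough):

[hadoop@CloudDeskTop software]$ sqoop-import --append --connect 'jdbc:mysql://192.168.154.134:3306/mmzs' --username 'root' --password '123456' --query 'select * from mmzs.testsqoop where uid>3 and $CONDITIONS' --split-by uid -m 2 --target-dir '/user/hive/warehouse/mmzs.db/testsqoop' --fields-terminated-by '\t' --lines-terminated-by '\n'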

Check on the cluster whether the data was really imported:
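For example, the imported part files under the target directory can be listed and viewed like this:

[hadoop@master01 software]$ hdfs dfs -ls /user/hive/warehouse/mmzs.db/testsqoop
[hadoop@master01 software]$ hdfs dfs -cat /user/hive/warehouse/mmzs.db/testsqoop/part-m-*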

 

Check in the Hive database whether the data was really imported:

The result confirms that the data was imported into the HDFS cluster successfully.

Delete the data on the cluster so the next import can run cleanly:

[hadoop@master01 software]$ hdfs dfs -rm -r /user/hive/warehouse/mmzs.db/testsqoop/part-m-00000

B. Import an entire table into the cluster at once

sqoop-import --append --connect 'jdbc:mysql://192.168.154.134:3306/mmzs' --username 'root' --password '123456' --table testsqoop -m 1 --target-dir '/user/hive/warehouse/mmzs.db/testsqoop/' --fields-terminated-by '\t' --lines-terminated-by '\n'
Run log (abridged; SLF4J/deprecation notices and most counters omitted):

17/12/30 22:28:31 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/12/30 22:28:31 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/12/30 22:28:32 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/12/30 22:28:33 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `testsqoop` AS t LIMIT 1
17/12/30 22:28:36 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/d427f3a0d1a3328c5dc9ae1bd6cbd988/testsqoop.jar
17/12/30 22:28:36 WARN manager.MySQLManager: It looks like you are importing from mysql.
17/12/30 22:28:36 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
17/12/30 22:28:36 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
17/12/30 22:28:36 INFO mapreduce.ImportJobBase: Beginning import of testsqoop
17/12/30 22:28:45 INFO db.DBInputFormat: Using read commited transaction isolation
17/12/30 22:28:45 INFO mapreduce.JobSubmitter: number of splits:1
17/12/30 22:28:47 INFO mapreduce.Job: The url to track the job: http://master01:8088/proxy/application_1514638990227_0002/
17/12/30 22:28:47 INFO mapreduce.Job: Running job: job_1514638990227_0002
17/12/30 22:29:29 INFO mapreduce.Job:  map 0% reduce 0%
17/12/30 22:30:06 INFO mapreduce.Job:  map 100% reduce 0%
17/12/30 22:30:07 INFO mapreduce.Job: Job job_1514638990227_0002 completed successfully
17/12/30 22:30:08 INFO mapreduce.Job: Counters: 30
    Job Counters
        Launched map tasks=1
    Map-Reduce Framework
        Map input records=10
        Map output records=10
17/12/30 22:30:08 INFO mapreduce.ImportJobBase: Transferred 128 bytes in 89.4828 seconds (1.4304 bytes/sec)
17/12/30 22:30:08 INFO mapreduce.ImportJobBase: Retrieved 10 records.
17/12/30 22:30:08 INFO util.AppendUtils: Appending to directory testsqoop

Check on the cluster whether the data was really imported:

Check in the Hive database whether the data was really imported:

 

The result confirms that the data was imported into the HDFS cluster successfully.
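As an aside: instead of combining --append with a manual hdfs dfs -rm before each re-run, sqoop-import also offers a --delete-target-dir flag that clears the target directory before importing, which makes the manual cleanup (and --append) unnecessary in that case; it is not used in this walkthrough.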

 

Reposted from: https://www.cnblogs.com/mmzs/p/8149921.html
