apache-spark - What is the difference between DataFrame.select() and DataFrame.toDF() in Spark SQL?

They both seem to return a new DataFrame.

Source code:

def toDF(self, *cols):
    jdf = self._jdf.toDF(self._jseq(cols))
    return DataFrame(jdf, self.sql_ctx)


def select(self, *cols):
    jdf = self._jdf.select(self._jcols(*cols))
    return DataFrame(jdf, self.sql_ctx)
Best answer

The difference is subtle.

For example, if you convert an unnamed tuple ("Pete", 22) into a DataFrame using .toDF("name", "age"), you can also rename that DataFrame later by calling the toDF method again. For example:

scala> val rdd = sc.parallelize(List(("Piter", 22), ("Gurbe", 27)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[2] at parallelize at <console>:27

scala> val df = rdd.toDF("name", "age")
df: org.apache.spark.sql.DataFrame = [name: string, age: int]

scala> df.show()
+-----+---+
| name|age|
+-----+---+
|Piter| 22|
|Gurbe| 27|
+-----+---+

scala> val df = rdd.toDF("person", "age")
df: org.apache.spark.sql.DataFrame = [person: string, age: int]

scala> df.show()
+------+---+
|person|age|
+------+---+
| Piter| 22|
| Gurbe| 27|
+------+---+
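Note that toDF renames columns positionally, so you must pass exactly one name per column; with a mismatched count it fails at runtime. A minimal sketch (assuming the same rdd as above):

```scala
// toDF takes one name per column; this two-column RDD cannot
// be renamed with a single name and fails at runtime
// (an IllegalArgumentException from an internal arity check):
val bad = rdd.toDF("name")

// the number of names must match the number of tuple fields:
val ok = rdd.toDF("name", "age")   // [name: string, age: int]
```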

With select, you can select columns, which can later be used to project the table or to save only the columns you need:

scala> df.select("age").show()
+---+
|age|
+---+
| 22|
| 27|
+---+

scala> df.select("age").write.save("/tmp/ages.parquet")
Scaling row group sizes to 88.37% for 8 writers.
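Beyond projecting existing columns, select also accepts Column expressions, so it can compute derived columns, which toDF cannot do because toDF only renames the columns that are already there. An illustrative sketch (the derived column name is an assumption, continuing the example above):

```scala
import org.apache.spark.sql.functions.col

// select can build new columns from expressions:
df.select(col("name"), (col("age") + 1).alias("age_next_year")).show()

// toDF, by contrast, can only rename the existing columns positionally.
```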

Hope this helps!
