in Data Science by (18.4k points)

Given the data frame df:

              Player   Team  Points  Mean  Price  Value
    Gameweek
    1            Jim  Leeds     4.4  4.40  10.44   0.44
    2            Jim  Leeds     8.9  6.65  12.97   2.53
    3            Jim  Leeds    -1.8  3.83  10.70  -2.27

I need to add a new row at the index 0 and fill it with some dummy values, plus the open price. For that I am trying:

    df.loc[-1] = [df['Player'].item(),
                  df['Team'].item(),
                  0.0,
                  0.0,
                  (df['Price'].item() - df['Value'].item()),
                  0.0]
    df.index = df.index + 1  # shifting the index
    df = df.sort_index()     # sorting by index, then resetting

In order to end up with:

              Player   Team  Points  Mean  Price  Value
    Gameweek
    0            Jim  Leeds     0.0  0.00  10.00   0.00
    1            Jim  Leeds     4.4  4.40  10.44   0.44
    2            Jim  Leeds     8.9  6.65  12.97   2.53
    3            Jim  Leeds    -1.8  3.83  10.70  -2.27

But I'm getting:

    df.loc[-1] = [df['Player'].item(),
        return self.values.item()
    ValueError: can only convert an array of size 1 to a Python scalar

What am I missing?

1 Answer

by (36.8k points)

The problem is `.item()`: it only works on a Series containing exactly one element, but each column here has three rows, which raises the ValueError. Use `.iloc[0]` instead, which selects the first element of a Series of any length. You can also assign directly to `df.loc[0]` rather than `df.loc[-1]`, so no index shift is needed:

    df.loc[0] = [df['Player'].iloc[0],
                 df['Team'].iloc[0],
                 0.0,
                 0.0,
                 (df['Price'].iloc[0] - df['Value'].iloc[0]),
                 0.0]
    df = df.sort_index()
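For reference, here is a minimal self-contained sketch of the whole fix, rebuilding the example frame from the question and then inserting the Gameweek 0 row:

```python
import pandas as pd

# Rebuild the example frame from the question.
df = pd.DataFrame(
    {
        "Player": ["Jim", "Jim", "Jim"],
        "Team": ["Leeds", "Leeds", "Leeds"],
        "Points": [4.4, 8.9, -1.8],
        "Mean": [4.40, 6.65, 3.83],
        "Price": [10.44, 12.97, 10.70],
        "Value": [0.44, 2.53, -2.27],
    },
    index=pd.Index([1, 2, 3], name="Gameweek"),
)

# .iloc[0] picks the first element of a Series of any length,
# unlike .item(), which requires a length-1 Series.
df.loc[0] = [
    df["Player"].iloc[0],
    df["Team"].iloc[0],
    0.0,
    0.0,
    df["Price"].iloc[0] - df["Value"].iloc[0],  # open price = Price - Value
    0.0,
]

df = df.sort_index()  # move the new Gameweek 0 row to the top
print(df)
```

After `sort_index()` the new row sits at index 0, so no shifting of the existing index is required.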

If you are a beginner and want to know more about Python, then do check out the Python for Data Science course.
