# Solution for Lidar Car Detection

A detailed solution for submission 155659 submitted for challenge Lidar Car Detection

## Starter Code for Lidar Car Detection

### What we are going to Learn¶

• Learning about how lidar works
• Using scikit-learn for regression ( predicting the number of cars in a scene )

Note : Create a copy of the notebook and use the copy for submission. Go to File > Save a Copy in Drive to create a new copy

Installing aicrowd-cli

In :
!pip install aicrowd-cli


In :
%aicrowd login

Please login here: https://api.aicrowd.com/auth/Zkw_WU9yTqnxlwOHSqwg-_S0o-toFa8zqy2jTo_x1b4
API Key valid
Saved API Key successfully!

In :
!rm -rf data
!mkdir data
%aicrowd ds dl -c lidar-car-detection -o data


# Importing Libraries¶

In :
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
import os
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import random


In :
# Reading the training dataset
# (assumes the download step above placed train.npz inside the data directory)
train_data = np.load("data/train.npz", allow_pickle=True)
train_data = train_data['train']

train_data.shape

Out:
(400, 2)
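Before visualizing, it helps to see what that `(400, 2)` shape means. A minimal sketch with synthetic data, assuming (as the indexing below suggests) that each row pairs a lidar point cloud — an `(N, 3)` array of x, y, z coordinates — with its label, the number of cars in the scene; the shapes here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_points = 4, 10

# Object array: column 0 holds point clouds, column 1 holds car counts.
train_data = np.empty((n_samples, 2), dtype=object)
for i in range(n_samples):
    train_data[i, 0] = rng.random((n_points, 3))  # synthetic point cloud
    train_data[i, 1] = int(rng.integers(0, 5))    # synthetic label

print(train_data.shape)        # (4, 2), mirroring the real (400, 2)
print(train_data[0, 0].shape)  # (10, 3): N points with x, y, z each
```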

### Visualizing the dataset¶

In this section, we will be visualizing a sample 3D lidar data

In [ ]:
# Getting a random 3D lidar sample data
INDEX = random.randint(0, train_data.shape[0] - 1)

# Getting the individual x, y and z points from the point cloud
x = train_data[INDEX][0][:, 0].tolist()
y = train_data[INDEX][0][:, 1].tolist()
z = train_data[INDEX][0][:, 2].tolist()

# Label for the corresponding sample ( no. of cars )
label = train_data[INDEX][1]

# Generating the 3D graph
fig = go.Figure(data=[go.Scatter3d(x=x, y=y, z=z,
                                   mode='markers',
                                   marker=dict(size=1,
                                               colorscale='Viridis',
                                               opacity=0.8))])
print("No. of cars : ", label)
fig.show()


Can you spot the cars in this 3D data?

## Splitting the dataset¶

In :
# Getting the 3D points and flattening each point cloud into a 1D array
X = train_data[:, 0]
X = [i.flatten() for i in X]

# labels
y = train_data[:, 1]
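The flattening step turns each `(N, 3)` point cloud into a single fixed-length feature vector that the random forest can consume. A small sketch with synthetic clouds (note this only works because every sample has the same number of points; clouds of varying size would need padding or subsampling first):

```python
import numpy as np

# Two synthetic "point clouds" with the same number of points each.
cloud_a = np.arange(12, dtype=float).reshape(4, 3)
cloud_b = np.arange(12, 24, dtype=float).reshape(4, 3)

# Flattening (4, 3) -> (12,): rows are concatenated as x0,y0,z0,x1,y1,z1,...
X = [c.flatten() for c in (cloud_a, cloud_b)]

print(X[0].shape)  # (12,)
```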

In :
# Splitting the dataset into training and testing
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)


# Training the model¶

In :
model = RandomForestRegressor(verbose=True, n_jobs=-1)

In :
model.fit(X_train, y_train)

[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 2 concurrent workers.
[Parallel(n_jobs=-1)]: Done  46 tasks      | elapsed:  4.9min
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 10.7min finished

Out:
RandomForestRegressor(bootstrap=True, ccp_alpha=0.0, criterion='mse',
max_depth=None, max_features='auto', max_leaf_nodes=None,
max_samples=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=100, n_jobs=-1, oob_score=False,
random_state=None, verbose=True, warm_start=False)

# Validation¶

In :
model.score(X_val, y_val)

[Parallel(n_jobs=2)]: Using backend ThreadingBackend with 2 concurrent workers.
[Parallel(n_jobs=2)]: Done  46 tasks      | elapsed:    0.0s
[Parallel(n_jobs=2)]: Done 100 out of 100 | elapsed:    0.0s finished

Out:
0.3173802260900771
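`model.score` reports R², which at ~0.32 leaves plenty of room for improvement. Since the target is a car count, the mean absolute error is often easier to interpret ("how many cars off, on average"). A minimal sketch with made-up values, computed with plain numpy:

```python
import numpy as np

# Synthetic ground-truth counts and model predictions, for illustration.
y_true = np.array([2, 3, 1, 4], dtype=float)
y_pred = np.array([2.5, 2.0, 1.0, 3.0], dtype=float)

mae = np.mean(np.abs(y_true - y_pred))        # average miscount per scene
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))

print(mae)   # 0.625
print(rmse)  # 0.75
```

The same expressions applied to `y_val` and `model.predict(X_val)` would give the validation-set errors.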

# Generating the predictions¶

In :
# Loading the test data
# (assumes the download step above placed test.npz inside the data directory)
test_data = np.load("data/test.npz", allow_pickle=True)
test_data = test_data['test']

test_data.shape

Out:
(601,)
In :
# Flattening each test point cloud into a 1D array
X_test = [i.flatten() for i in test_data]

In :
# Generating the predictions
predictions = model.predict(X_test)
predictions.shape

[Parallel(n_jobs=2)]: Using backend ThreadingBackend with 2 concurrent workers.
[Parallel(n_jobs=2)]: Done  46 tasks      | elapsed:    0.0s
[Parallel(n_jobs=2)]: Done 100 out of 100 | elapsed:    0.0s finished

Out:
(601,)
In :
submission = pd.DataFrame({"label":predictions})
submission

Out:
label
0 2.70
1 2.40
2 2.68
3 3.26
4 2.34
... ...
596 2.57
597 2.01
598 2.10
599 2.85
600 2.62

601 rows × 1 columns
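Notice the regressor outputs fractional counts (2.70, 2.40, ...), while the true labels are whole numbers of cars. One optional post-processing step — a sketch, not part of the original submission, and worth validating against the challenge metric before using — is to round to the nearest integer and clip negatives to zero:

```python
import numpy as np

# Fractional regressor outputs (synthetic values for illustration).
predictions = np.array([2.70, 2.40, 0.30, -0.10])

# Round to the nearest whole car and clip any negative count to zero.
rounded = np.clip(np.rint(predictions), 0, None).astype(int)

print(rounded)  # [3 2 0 0]
```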

In :
# Saving the predictions
!rm -rf assets
!mkdir assets
submission.to_csv(os.path.join("assets", "submission.csv"))


# Submitting our Predictions¶

Note : Please save the notebook before submitting it (Ctrl + S)

In [ ]:
%aicrowd notebook submit -c lidar-car-detection -a assets --no-verify