Commit e9172ce9 authored by Pablo Aznar's avatar Pablo Aznar

Readme updated

parent e214097b
# SecureGrid
Deep Learning based Attack Detection System for Smart Grids
## Usage
In order to detect attacks, the power consumption values of the houses are analysed. To that end, the DataFrames used to feed the neural network (a convolutional autoencoder) first have to be created.
This is done with the `dataframe_creation` notebook, which generates `.pkl` files containing the required DataFrames. Each DataFrame contains the following features:
| Feature | Description |
| ------------- | ------------- |
| Day | Day of the month of the first value in the window |
| Hour | Hour of the first value in the window |
| Minute | Minute of the first value in the window |
| Pn | Power consumption window values |
| Mean | Mean of the window values |
| Mean_i - Mean_i-1 | Difference between the mean of the window values and the mean of the previous window values |
| s | Standard deviation of the window values |
| Pn - P1 | Difference between the last and first value of the window |
| Q1 | First quartile of the window values |
| Q2 | Median of the window values |
| Q3 | Third quartile of the window values |
| IQR | Interquartile range of the window values |
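The feature columns above can be sketched for a single window. This is a minimal illustration, not the notebook's actual code; the function name, the `prev_mean` argument, and the per-value column names `P1…Pn` are assumptions:

```python
import pandas as pd

def window_features(window: pd.Series, start_ts: pd.Timestamp, prev_mean: float) -> dict:
    """Build the feature row for one power-consumption window."""
    q1, q2, q3 = window.quantile([0.25, 0.5, 0.75])
    return {
        "Day": start_ts.day,        # taken from the first value in the window
        "Hour": start_ts.hour,
        "Minute": start_ts.minute,
        **{f"P{i + 1}": v for i, v in enumerate(window)},  # raw window values
        "Mean": window.mean(),
        "Mean_i - Mean_i-1": window.mean() - prev_mean,    # change vs. previous window
        "s": window.std(),
        "Pn - P1": window.iloc[-1] - window.iloc[0],
        "Q1": q1,
        "Q2": q2,
        "Q3": q3,
        "IQR": q3 - q1,
    }
```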
Once the DataFrames have been created, they are used to feed the autoencoder, which is configured in `conv1d_autoencoder.py`.
The `normal_data_path` variable must point to the `.pkl` file containing attack-free data, i.e. the normal behaviour of the houses, while the `attack_data_path` variable must point to the `.pkl` file containing the data to be analysed for attacks.
Furthermore, to train the autoencoder, set the `DO_TRAINING` variable to `True`.
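The detection mechanism itself is not described here, but a common pattern for autoencoder-based anomaly detection is to threshold the per-window reconstruction error. A minimal sketch under that assumption (function name and threshold are hypothetical, not taken from the script):

```python
import numpy as np

def label_by_reconstruction_error(original: np.ndarray,
                                  reconstructed: np.ndarray,
                                  threshold: float) -> np.ndarray:
    """Label each window 1 (attack) if its reconstruction MSE exceeds the threshold."""
    errors = np.mean((original - reconstructed) ** 2, axis=1)  # per-row MSE
    return (errors > threshold).astype(int)
```

Windows that the autoencoder reconstructs poorly (it was trained only on normal behaviour) are flagged as attacks.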
Finally, the following command executes the system:

```shell
$ python conv1d_autoencoder.py
```
## Results
Once the system has been executed, it generates the `predicted_labels.csv` file, which contains a label for every entry of the DataFrame: attack (1) or normal behaviour (0).
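The output file can then be summarised, for example by counting how many entries were flagged. A small sketch, assuming the CSV's label column is named `label` (adjust the name to whatever the script actually writes):

```python
import pandas as pd

def summarize_labels(path: str = "predicted_labels.csv") -> tuple:
    """Count attack (1) and normal (0) entries in the predicted-labels file."""
    labels = pd.read_csv(path)
    n_attack = int((labels["label"] == 1).sum())
    n_normal = int((labels["label"] == 0).sum())
    return n_attack, n_normal
```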
The commit's changes to `conv1d_autoencoder.py` replace the hard-coded paths with the two variables described above:

```diff
@@ -18,9 +18,11 @@ DO_TRAINING = False
 def main():
     data_dir_path = './data'
     model_dir_path = './models'
+    normal_data_path = '/normal/houses_concatenated.pkl'
+    attack_data_path = '/anomaly_20/labels_df/house_0.pkl'
 
     # Read Normal Data (No Attacks)
-    houses = pd.read_pickle(data_dir_path + "/normal/houses_concatenated.pkl")
+    houses = pd.read_pickle(data_dir_path + normal_data_path)
     print(houses.head())
     houses = houses.drop("attacked", axis=1)
@@ -58,14 +60,14 @@ def main():
     final_fs = []
-    df_house_normal = pd.read_pickle(data_dir_path + '/normal/houses_concatenated.pkl')
+    df_house_normal = pd.read_pickle(data_dir_path + normal_data_path)
     labels_normal = df_house_normal["attacked"]
     df_house_normal = df_house_normal.drop("attacked", axis=1)
     scaled_house_normal = scaler.transform(df_house_normal)
 
     # Read Attack Data
-    df_house_anomaly = pd.read_pickle(data_dir_path + '/anomaly_20/labels_df/house_0.pkl')
+    df_house_anomaly = pd.read_pickle(data_dir_path + attack_data_path)
     labels_anomaly = df_house_anomaly["attacked"]
     df_house_anomaly = df_house_anomaly.drop("attacked", axis=1)
```