의젓한사슴127 · 2021.06.02

I want to build a simple program in Python that predicts coin prices

With fbprophet I can do a completely beginner-level calculation, loading only the timestamps and closing prices. But the resulting graph comes out far too broad, which isn't the level of machine learning I had in mind.

I'd appreciate it if you could roughly list or explain the libraries or functions available for machine learning in Python.
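For reference, a minimal sketch of the fbprophet workflow described above, assuming a CSV with a timestamp and a closing-price column (the file and column names here are placeholders; on newer PyPI releases the package is named prophet):

    import pandas as pd
    from fbprophet import Prophet  # newer releases: from prophet import Prophet

    # Hypothetical input file with a timestamp and a closing-price column.
    df = pd.read_csv('coin.csv')
    df = df.rename(columns={'timestamp': 'ds', 'close': 'y'})  # Prophet expects ds/y

    m = Prophet()
    m.fit(df)
    future = m.make_future_dataframe(periods=30)  # forecast 30 periods ahead
    forecast = m.predict(future)
    m.plot(forecast)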

1 answer
  • If you visit the site linked below, there's also a video walkthrough, so it should be helpful...^^;

    Source: https://opentutorials.org/module/3811/22947

    Predicting stock and Bitcoin prices

    We'll build an AI that predicts stock prices and cryptocurrency prices using deep learning (LSTM).
    Don't be surprised: it's much more accurate than you'd expect.

    Source code (GitHub): https://github.com/kairess/stock_crypto_price_prediction

    Dependencies:
    - Python
    - numpy
    - Keras
    - pandas
    - matplotlib

    Dataset:
    - Yahoo Finance: https://finance.yahoo.com
    - CoinMarketCap: https://coinmarketcap.com
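
    The script below expects this export at dataset/eth.csv. As a quick sanity check (an assumption about the export format, not part of the tutorial), the file should contain at least High and Low price columns:

    import pandas as pd

    # Assumed layout of dataset/eth.csv: a historical-data export from one of
    # the sites above, with at least 'High' and 'Low' price columns.
    data = pd.read_csv('dataset/eth.csv')
    assert {'High', 'Low'}.issubset(data.columns), data.columns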

    ------------------------------------------------------------------------------------------------------------------

    import datetime

    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    from keras.models import Sequential
    from keras.layers import LSTM, Dropout, Dense, Activation
    from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau

    data = pd.read_csv('dataset/eth.csv')
    data.head()

    high_prices = data['High'].values
    low_prices = data['Low'].values
    mid_prices = (high_prices + low_prices) / 2  # mid price = average of high and low

    # Build sliding windows of 51 points each: 50 inputs + 1 target
    seq_len = 50
    sequence_length = seq_len + 1

    result = []
    for index in range(len(mid_prices) - sequence_length):
        result.append(mid_prices[index: index + sequence_length])

    # Normalize each window relative to its first price
    def normalize_windows(data):
        normalized_data = []
        for window in data:
            normalized_window = [((float(p) / float(window[0])) - 1) for p in window]
            normalized_data.append(normalized_window)
        return np.array(normalized_data)

    result = normalize_windows(result)

    # split train and test data (90% train, 10% test)
    row = int(round(result.shape[0] * 0.9))
    train = result[:row, :]
    np.random.shuffle(train)

    x_train = train[:, :-1]
    x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
    y_train = train[:, -1]

    x_test = result[row:, :-1]
    x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
    y_test = result[row:, -1]

    x_train.shape, x_test.shape

    ((283, 50, 1), (31, 50, 1))
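
    To see what normalize_windows is doing, here's a toy example (mine, not from the tutorial): each 51-point window is rescaled relative to its first price, so the model learns relative moves rather than absolute price levels.

    import numpy as np

    # A toy window of three prices; each becomes a return vs. the first price.
    window = np.array([100.0, 110.0, 95.0])
    print(window / window[0] - 1)  # [ 0.    0.1  -0.05]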

    model = Sequential()
    model.add(LSTM(50, return_sequences=True, input_shape=(50, 1)))
    model.add(LSTM(64, return_sequences=False))
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mse', optimizer='rmsprop')
    model.summary()

    start_time = datetime.datetime.now().strftime('%Y_%m_%d_%H_%M_%S')
    model.fit(x_train, y_train,
              validation_data=(x_test, y_test),
              batch_size=10,
              epochs=20,
              callbacks=[
                  TensorBoard(log_dir='logs/%s' % (start_time)),
                  ModelCheckpoint('./models/%s_eth.h5' % (start_time),
                                  monitor='val_loss', verbose=1,
                                  save_best_only=True, mode='auto'),
                  ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                                    patience=5, verbose=1, mode='auto')
              ])

    Train on 283 samples, validate on 31 samples
    Epoch 1/20
    283/283 [==============================] - 3s 12ms/step - loss: 0.0984 - val_loss: 0.0010
    Epoch 00001: val_loss improved from inf to 0.00102, saving model to ./models/2018_10_31_21_41_26_eth.h5
    Epoch 2/20
    283/283 [==============================] - 3s 10ms/step - loss: 0.0466 - val_loss: 0.0012
    Epoch 00002: val_loss did not improve from 0.00102
    Epoch 3/20
    283/283 [==============================] - 4s 14ms/step - loss: 0.0377 - val_loss: 0.0019
    Epoch 00003: val_loss did not improve from 0.00102
    Epoch 4/20
    283/283 [==============================] - 5s 17ms/step - loss: 0.0396 - val_loss: 7.9362e-04
    Epoch 00004: val_loss improved from 0.00102 to 0.00079, saving model to ./models/2018_10_31_21_41_26_eth.h5
    Epoch 5/20
    283/283 [==============================] - 5s 18ms/step - loss: 0.0279 - val_loss: 0.0038
    Epoch 00005: val_loss did not improve from 0.00079
    Epoch 6/20
    283/283 [==============================] - 5s 18ms/step - loss: 0.0227 - val_loss: 0.0012
    Epoch 00006: val_loss did not improve from 0.00079
    Epoch 7/20
    283/283 [==============================] - 5s 19ms/step - loss: 0.0214 - val_loss: 0.0021
    Epoch 00007: val_loss did not improve from 0.00079
    Epoch 8/20
    283/283 [==============================] - 5s 16ms/step - loss: 0.0205 - val_loss: 7.2855e-04
    Epoch 00008: val_loss improved from 0.00079 to 0.00073, saving model to ./models/2018_10_31_21_41_26_eth.h5
    Epoch 9/20
    283/283 [==============================] - 3s 10ms/step - loss: 0.0191 - val_loss: 8.8453e-04
    Epoch 00009: val_loss did not improve from 0.00073
    Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
    Epoch 10/20
    283/283 [==============================] - 3s 9ms/step - loss: 0.0141 - val_loss: 0.0011
    Epoch 00010: val_loss did not improve from 0.00073
    Epoch 11/20
    283/283 [==============================] - 2s 9ms/step - loss: 0.0136 - val_loss: 0.0011
    Epoch 00011: val_loss did not improve from 0.00073
    Epoch 12/20
    283/283 [==============================] - 3s 9ms/step - loss: 0.0129 - val_loss: 0.0011
    Epoch 00012: val_loss did not improve from 0.00073
    Epoch 13/20
    283/283 [==============================] - 3s 9ms/step - loss: 0.0128 - val_loss: 8.5286e-04
    Epoch 00013: val_loss did not improve from 0.00073
    Epoch 14/20
    283/283 [==============================] - 3s 9ms/step - loss: 0.0127 - val_loss: 7.9398e-04
    Epoch 00014: val_loss did not improve from 0.00073
    Epoch 00014: ReduceLROnPlateau reducing learning rate to 4.0000001899898055e-05.
    Epoch 15/20
    283/283 [==============================] - 3s 9ms/step - loss: 0.0126 - val_loss: 9.4050e-04
    Epoch 00015: val_loss did not improve from 0.00073
    Epoch 16/20
    283/283 [==============================] - 3s 10ms/step - loss: 0.0122 - val_loss: 9.6500e-04
    Epoch 00016: val_loss did not improve from 0.00073
    Epoch 17/20
    283/283 [==============================] - 3s 10ms/step - loss: 0.0121 - val_loss: 9.9169e-04
    Epoch 00017: val_loss did not improve from 0.00073
    Epoch 18/20
    283/283 [==============================] - 3s 10ms/step - loss: 0.0122 - val_loss: 9.9065e-04
    Epoch 00018: val_loss did not improve from 0.00073
    Epoch 19/20
    283/283 [==============================] - 3s 10ms/step - loss: 0.0122 - val_loss: 0.0010
    Epoch 00019: val_loss did not improve from 0.00073
    Epoch 00019: ReduceLROnPlateau reducing learning rate to 8.000000525498762e-06.
    Epoch 20/20
    283/283 [==============================] - 3s 10ms/step - loss: 0.0119 - val_loss: 0.0010
    Epoch 00020: val_loss did not improve from 0.00073

    <keras.callbacks.History at 0x1328cdba8>
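
    One caveat: since ModelCheckpoint saved only when val_loss improved, the best weights (val_loss 0.00073 at epoch 8) live in the .h5 file, not necessarily in the in-memory model. A sketch for reloading them; the timestamped filename is the one from the log above and will differ on your run:

    from keras.models import load_model

    # Reload the best checkpoint written during training (replace the
    # timestamp with the one from your own run).
    best_model = load_model('./models/2018_10_31_21_41_26_eth.h5')
    pred = best_model.predict(x_test)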

    pred = model.predict(x_test)

    fig = plt.figure(facecolor='white', figsize=(20, 10))
    ax = fig.add_subplot(111)
    ax.plot(y_test, label='True')
    ax.plot(pred, label='Prediction')
    ax.legend()
    plt.show()
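
    Note that both curves in this plot are in normalized units (returns relative to each window's first price), not actual prices. To recover prices you'd need to keep a copy of the raw windows before normalize_windows overwrites result; a hypothetical helper, not part of the tutorial:

    import numpy as np

    def denormalize(preds, raw_windows):
        # preds: model output, shape (n, 1), in "return vs. first price" units.
        # raw_windows: un-normalized test windows, shape (n, seq_len + 1), saved
        # before normalize_windows (hypothetical; the original script overwrites
        # them). Recover prices as (pred + 1) * first price of each window.
        first_prices = raw_windows[:, 0]
        return (preds.flatten() + 1) * first_prices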