
TensorFlow 'NoneType' object is not subscriptable during fit() #41

Open
vrunm opened this issue Jul 29, 2023 · 4 comments
vrunm commented Jul 29, 2023

I am building a TensorFlow model that takes 4 inputs and gives 2 outputs.
I first start with a pd.DataFrame:


train_targets = train_features[["content", "wording"]]
train_features = train_features[["text", "prompt_question", "prompt_title", "prompt_text"]]

Then, I use a generator to create the TensorFlow Dataset:

def generator():
    for text, prompt_question, prompt_title, prompt_text, content, wording in zip(
        train_features["text"],
        train_features["prompt_question"],
        train_features["prompt_title"],
        train_features["prompt_text"],
        train_targets["content"],
        train_targets["wording"],
    ):
        yield (
            {"text": text_vectorization(text),
             "prompt_question": text_vectorization(prompt_question),
             "prompt_title": text_vectorization(prompt_title),
             "prompt_text": text_vectorization(prompt_text)},
            {"content": content, "wording": wording},
        )

train_ds = tf.data.Dataset.from_generator(
    generator,
    output_types=(
        {"text": tf.int64, "prompt_question": tf.int64,
         "prompt_title": tf.int64, "prompt_text": tf.int64},
        {"content": tf.float32, "wording": tf.float32},
    ),
)
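
One hedged guess at the cause, sketched below: `from_generator` called with only `output_types` produces elements of unknown shape, and the LSTM later fails when it reads `input_shape[1]`. Passing an `output_signature` of `tf.TensorSpec`s gives the dataset static shapes. The generator body here is a dummy stand-in for the real one, and `max_length = 600` matches the TextVectorization setting:

```python
import tensorflow as tf

max_length = 600  # output_sequence_length of the TextVectorization layer
input_names = ["text", "prompt_question", "prompt_title", "prompt_text"]

def generator():
    # dummy stand-in for the real generator: yields one fake element
    yield (
        {name: tf.zeros([max_length], tf.int64) for name in input_names},
        {"content": tf.constant(2.0), "wording": tf.constant(1.0)},
    )

train_ds = tf.data.Dataset.from_generator(
    generator,
    output_signature=(
        {name: tf.TensorSpec(shape=(max_length,), dtype=tf.int64)
         for name in input_names},
        {name: tf.TensorSpec(shape=(), dtype=tf.float32)
         for name in ["content", "wording"]},
    ),
).batch(32)
```

With static shapes in `element_spec`, Keras can infer the timestep dimension instead of seeing an unknown shape for the whole tensor.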

Here is what one "row" of train_ds looks like:
({"text": [1, 0, 0, ..., 1, 1], "prompt_question": [1, 0, 0, ..., 1, 1], "prompt_title": [1, 0, 0, ..., 1, 1], "prompt_text": [1, 0, 0, ..., 1, 1]}, {"content": 2, "wording": 1})
Every value is a tensor.


Type of text is <class 'tensorflow.python.framework.ops.EagerTensor'>
Type of prompt_question is <class 'tensorflow.python.framework.ops.EagerTensor'>
Type of prompt_title is <class 'tensorflow.python.framework.ops.EagerTensor'>
Type of prompt_text is <class 'tensorflow.python.framework.ops.EagerTensor'>
Type of content is <class 'tensorflow.python.framework.ops.EagerTensor'>
Type of wording is <class 'tensorflow.python.framework.ops.EagerTensor'>

Note that I use a TextVectorization layer from keras:


max_tokens = 20000
max_length = 600

text_vectorization = keras.layers.TextVectorization(
 max_tokens=max_tokens,
 output_mode="int",
 output_sequence_length=max_length,
)

text_vectorization.adapt(a list of all my texts)
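
For reference, a minimal self-contained sketch of this setup, adapted on a made-up two-sentence corpus (the real texts are not shown in the issue) — each batched call then returns int64 tensors padded or truncated to max_length:

```python
import tensorflow as tf
from tensorflow import keras

max_tokens = 20000
max_length = 600

text_vectorization = keras.layers.TextVectorization(
    max_tokens=max_tokens,
    output_mode="int",
    output_sequence_length=max_length,
)

# dummy corpus standing in for the real list of texts
text_vectorization.adapt(["a dummy text", "another dummy text"])

vecs = text_vectorization(["a dummy text"])
# vecs: int64 tensor of shape (1, 600)
```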

At this point, train_ds contains no None values.

Here is my model:

def generate_model():
    input_names = train_features.columns.tolist()

    inputs = []

    for name in input_names:
        inputs.append(keras.layers.Input(shape=(None,), dtype="int64", name=name))

    concatenate = keras.layers.concatenate(inputs)
    
    embedded = keras.layers.Embedding(input_dim=max_tokens, output_dim=256, mask_zero=True)(concatenate)
    
    x = keras.layers.Bidirectional(keras.layers.LSTM(32))(embedded)
    x = keras.layers.Dropout(0.5)(x)

    output_content = keras.layers.Dense(1, activation="linear", name="content")(x)
    output_wording = keras.layers.Dense(1, activation="linear", name="wording")(x)

    model = keras.models.Model(inputs=inputs, outputs=[output_content, output_wording])

    return model

I compile it like this:
model.compile(loss={"content": "mean_squared_error", "wording": "mean_squared_error"}, optimizer="adam")

The error occurs when I try to fit the model:
history = model.fit(x=train_ds, batch_size=32, epochs=20, callbacks=[callbacks])
Here is the traceback:

TypeError: Exception encountered when calling layer 'forward_lstm_5' (type LSTM).
    
    'NoneType' object is not subscriptable
    
    Call arguments received by layer 'forward_lstm_5' (type LSTM):
      • inputs=tf.Tensor(shape=<unknown>, dtype=float32)
      • mask=tf.Tensor(shape=<unknown>, dtype=bool)
      • training=True
      • initial_state=None

It appears that the error arises from the LSTM layer:
x = keras.layers.Bidirectional(keras.layers.LSTM(32))(embedded)

tilakrayal (Collaborator) commented Jul 31, 2023

@vrunm,
@vrunm,
With the code provided above, we weren't able to analyse the issue and provide a pointer to resolve it. Kindly find the gist of it here. Could you please provide the complete code, the dependencies, and the TensorFlow version you are using? Thank you!

vrunm (Author) commented Aug 2, 2023

The code you have provided is correct, but since you don't have access to the original data it does not run, and I cannot provide the dataset details. Is the error due to the definition of the LSTM layer? I have looked extensively and could not find any other solutions to this error.
I found a similar issue Link where a similar problem was encountered, but that issue has not been fully resolved yet.

@Matt-Bailey-EcoSync

I have the same issue, and have created an MWE here:
https://colab.research.google.com/gist/Matt-Bailey-EcoSync/9d39319c92633448dc1e623d8ce91dcf/keras-nonetype-issue.ipynb

I've found that iterating over the dataset and fitting on batches manually works:

m1 = tf.keras.Model(encoder.inputs, decoder_outputs)
m1.add_loss(vae_loss2(encoder_inputs, decoder_outputs, z_log_sigma, z_mean)) #<===========
m1.compile(loss=None, optimizer='adam')

for feature, target in mwe_ds:
    m1.fit(feature, target)

but fitting on the dataset directly does not work:
m1.fit(mwe_ds) # TypeError: 'NoneType' object is not subscriptable

However, running eagerly works (with a performance penalty):

m2 = tf.keras.Model(encoder.inputs, decoder_outputs)
m2.add_loss(vae_loss2(encoder_inputs, decoder_outputs, z_log_sigma, z_mean))
m2.compile(loss=None, optimizer='adam', run_eagerly=True)
m2.fit(mwe_ds)
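
Another workaround that avoids run_eagerly, assuming the root cause is the unknown element shapes that from_generator produces when given only output_types: pin the shapes back on with tf.ensure_shape in a map before fitting. The element structure below (a width-4 feature and a scalar target) is invented for illustration, since the MWE's shapes are only in the notebook:

```python
import tensorflow as tf

# Hypothetical sketch: from_generator with only output_types yields
# elements of unknown shape, which trips the LSTM in graph mode.
def gen():
    yield tf.zeros([4], tf.float32), tf.constant(0.0)

ds = tf.data.Dataset.from_generator(
    gen, output_types=(tf.float32, tf.float32))
# ds.element_spec shapes are <unknown> at this point

# a map with tf.ensure_shape pins static shapes back onto the elements
fixed = ds.map(
    lambda x, y: (tf.ensure_shape(x, [4]), tf.ensure_shape(y, [])))
# fixed.element_spec now reports shapes (4,) and ()
```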

sachinprasadhs transferred this issue from keras-team/keras on Sep 22, 2023
@plasmatech8

Is there any progress on this?

Do RNNs work on other platforms, or not at all?

I am trying to fit a simple model but I am getting this issue on every RNN layer.

Env:

  • Python: 3.10.12
  • tensorflow==2.15.0
  • MacOS Sonoma 14.2.1, intel

Reading in the time-dimension of a spectrogram with shape: (time=None, frequencies=257, channels=1)

# Initial layers
model = Sequential()
model.add(Input(shape=(None, 257, 1)))

# Drop channels dimension
model.add(Lambda(lambda x: x[:, :, :, 0]))

# Add an RNN layer (LSTM) to process along the time dimension
model.add(LSTM(5))

# Final output layer
model.add(Dense(1, activation='sigmoid'))

# ...

model.fit(train, epochs=1, validation_data=test)

Error:

TypeError: in user code:

    File "/**/keras/src/engine/training.py", line 1401, in train_function  *
        return step_function(self, iterator)
    File "/**/keras/src/engine/training.py", line 1384, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/**/keras/src/engine/training.py", line 1373, in run_step  **
        outputs = model.train_step(data)
    File "/**/keras/src/engine/training.py", line 1150, in train_step
        y_pred = self(x, training=True)
    File "/**/keras/src/utils/traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/**/keras/src/layers/rnn/lstm.py", line 616, in call
        timesteps = input_shape[0] if self.time_major else input_shape[1]

    TypeError: Exception encountered when calling layer 'lstm' (type LSTM).
    
    'NoneType' object is not subscriptable
    
    Call arguments received by layer 'lstm' (type LSTM):
      • inputs=tf.Tensor(shape=<unknown>, dtype=float32)
      • mask=None
      • training=True
      • initial_state=None

It only works when run_eagerly=True is set. Though I presume that makes the model train slower :(
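
The shape=<unknown> in the traceback suggests the dataset itself carries no static shape (for example, if it came from from_generator with only output_types). If so, a hedged sketch of a fix that keeps graph mode: restore the (batch, time, 257, 1) shape with tf.ensure_shape before fitting. The dummy generator below stands in for the real spectrogram pipeline:

```python
import tensorflow as tf

# Dummy stand-in for the real spectrogram dataset: one batch of 8
# examples, each with 10 time steps, 257 frequency bins, 1 channel.
def gen():
    yield tf.zeros([8, 10, 257, 1], tf.float32), tf.zeros([8], tf.float32)

train = tf.data.Dataset.from_generator(
    gen, output_types=(tf.float32, tf.float32))
# element shapes are <unknown> here

# pin the static rank so the LSTM can read input_shape[1]
train = train.map(
    lambda x, y: (tf.ensure_shape(x, [None, None, 257, 1]),
                  tf.ensure_shape(y, [None])))
```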
