Commit d08e8c8b authored by Paul Bethge

init WS code

parent 419e389f
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "JmvzTcYice-_"
},
"source": [
"# Copyright 2020 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "zlvAS8a9cD_t"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\"); \n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b2VYQpTttmVN"
},
"source": [
"# TensorFlow Fairness Indicators Example Using CelebA Dataset"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3iFsS2WSeRwe"
},
"source": [
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />Original Code on TensorFlow.org</a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/zkmkarlsruhe/bias-workshop/blob/main/Fairness_Indicators_CelebA_Case_Study.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://github.com/zkmkarlsruhe/bias-workshop//blob/main/Fairness_Indicators_CelebA_Case_Study.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-DQoReGDeN16"
},
"source": [
"This notebook explores bias in images using Google’s [Fairness Indicators](https://www.tensorflow.org/responsible_ai/fairness_indicators/guide). In particular, this notebook will:\n",
"\n",
"* Train a simple neural network model to detect a person's smile in images using [`tf.keras`](https://www.tensorflow.org/guide/keras) and the large-scale CelebFaces Attributes ([CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)) dataset.\n",
"* Evaluate model performance against a commonly used fairness metric across age groups, using Fairness Indicators.\n",
"* Let's you try out the model by taking a selfie.\n"
]
},
{
"source": [
"# Acknowledgement \n",
"We hereby gratefully thank TensorFlow for providing tools and examples to explore the topic of bias in Machine Learning (ML) applications.\n",
"\n",
"We have introduced small changes to the provided notebook in order to suit our needs for this workshop. Please visit the [website](https://www.tensorflow.org/responsible_ai/) to learn more about how TensorFlow gives opportunities to make AI more responsible (including detecting and mitigating bias). Please find the orignal notebook [here](https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study).\n",
"\n",
"Changes to the original code have been developed at [ZKM | Hertz-Lab](https://zkm.de/en/about-the-zkm/organization/hertz-lab) as part of the project [»The Intelligent Museum«](https://zkm.de/en/project/the-intelligent-museum), which is generously funded by the Digital Culture Programme of the [Kulturstiftung des Bundes](https://www.kulturstiftung-des-bundes.de/en) (German Federal Cultural Foundation). Please find other codes developed as part of this project at [intelligent.museum/code](https://intelligent.museum/code)\n",
"\n",
"\n",
"\n"
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "markdown",
"metadata": {
"id": "JyCbEWt5Zxe2"
},
"source": [
"# Installation and Import\n",
"This notebook was created in [Colaboratory](https://research.google.com/colaboratory/faq.html), connected to the Python 3 Google Compute Engine backend. \n",
"\n",
"We will start by downloading the necessary python packages to get the required data (tensorflow-datasets), train a neural network (tensorflow) and evalute (fairness-indicators / tensorflow-model-analysis). Afterwards, we import specific modules from those librabries.\n",
"\n",
"__Important:__ the very first time you run the pip installs, you may be asked to restart the runtime because of preinstalled out of date packages. Once you do so, the correct packages will be used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "T-Zm-KDdt0bn"
},
"outputs": [],
"source": [
"#@title Pip installs\n",
"!pip install -q -U pip==20.2\n",
"\n",
"!pip install -q tensorflow-datasets tensorflow\n",
"!pip install fairness-indicators \\\n",
" \"absl-py==0.12.0\" \\\n",
" \"apache-beam<3,>=2.28\" \\\n",
" \"avro-python3==1.9.1\" \\\n",
" \"pyzmq==17.0.0\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UTBBdSGaZ8aW"
},
"outputs": [],
"source": [
"#@title Import Modules { display-mode: \"form\" }\n",
"import os\n",
"import sys\n",
"import tempfile\n",
"import urllib\n",
"\n",
"import tensorflow as tf\n",
"from tensorflow import keras\n",
"\n",
"import tensorflow_datasets as tfds\n",
"tfds.disable_progress_bar()\n",
"\n",
"import numpy as np\n",
"\n",
"from tensorflow_metadata.proto.v0 import schema_pb2\n",
"from tfx_bsl.tfxio import tensor_adapter\n",
"from tfx_bsl.tfxio import tf_example_record\n",
"\n",
"import tensorflow_model_analysis as tfma\n",
"import fairness_indicators as fi\n",
"from google.protobuf import text_format\n",
"import apache_beam as beam\n",
"\n",
"#Enable Eager Execution and Print Versions\n",
"if tf.__version__ < \"2.0.0\":\n",
" tf.compat.v1.enable_eager_execution()\n",
" print(\"Eager execution enabled.\")\n",
"else:\n",
" print(\"Eager execution enabled by default.\")\n",
"\n",
"print(\"TensorFlow \" + tf.__version__)\n",
"print(\"TFMA \" + tfma.VERSION_STRING)\n",
"print(\"TFDS \" + tfds.version.__version__)\n",
"print(\"FI \" + fi.version.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "idY3Uuk3yvty"
},
"source": [
"# CelebA Dataset\n",
"[CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) is a large-scale face attributes dataset with more than 200,000 celebrity images, each with 40 attribute annotations (such as hair type, fashion accessories, facial features, etc.) and 5 landmark locations (eyes, mouth and nose positions). For more details take a look at [the paper](https://liuziwei7.github.io/projects/FaceAttributes.html).\n",
"With the permission of the owners, we have stored this dataset on Google Cloud Storage and mostly access it via [TensorFlow Datasets(`tfds`)](https://www.tensorflow.org/datasets).\n",
"\n",
"In this notebook:\n",
"* Our model will attempt to classify whether the subject of the image is smiling, as represented by the \"Smiling\" attribute<sup>*</sup>.\n",
"* Images will be resized from 218x178 to 64x64 to reduce the execution time and memory when training.\n",
"* Our model's performance will be evaluated across age groups, using the binary \"Young\" attribute. We will call this \"age group\" in this notebook.\n",
"\n",
"___\n",
"\n",
"<sup>*</sup> While there is little information available about the labeling methodology for this dataset, we will assume that the \"Smiling\" attribute was determined by a pleased, kind, or amused expression on the subject's face. For the purpose of this case study, we will take these labels as ground truth.\n",
"\n",
"\n"
]
},
{
"source": [
"gcs_base_dir = \"gs://celeb_a_dataset/\"\n",
"celeb_a_builder = tfds.builder(\"celeb_a\", data_dir=gcs_base_dir, version='2.0.0')\n",
"\n",
"celeb_a_builder.download_and_prepare()\n",
"\n",
"num_test_shards_dict = {'0.3.0': 4, '2.0.0': 2} # Used because we download the test dataset separately\n",
"version = str(celeb_a_builder.info.version)\n",
"print('Celeb_A dataset version: %s' % version)\n",
"\n",
"local_root = tempfile.mkdtemp(prefix='test-data')\n",
"def local_test_filename_base():\n",
" return local_root\n",
"\n",
"def local_test_file_full_prefix():\n",
" return os.path.join(local_test_filename_base(), \"celeb_a-test.tfrecord\")\n",
"\n",
"def copy_test_files_to_local():\n",
" filename_base = local_test_file_full_prefix()\n",
" num_test_shards = num_test_shards_dict[version]\n",
" for shard in range(num_test_shards):\n",
" url = \"https://storage.googleapis.com/celeb_a_dataset/celeb_a/%s/celeb_a-test.tfrecord-0000%s-of-0000%s\" % (version, shard, num_test_shards)\n",
" filename = \"%s-0000%s-of-0000%s\" % (filename_base, shard, num_test_shards)\n",
" res = urllib.request.urlretrieve(url, filename)"
],
"cell_type": "code",
"metadata": {
"id": "zCSemFST0b89"
},
"execution_count": null,
"outputs": []
},
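{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, the next cell (a small addition to the original tutorial) prints the `tfds` feature specification and the \"Smiling\" and \"Young\" attributes of a single test image, so you can see what the raw data looks like before it is resized and batched."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#@title (Optional) Peek at the dataset\n",
"# Optional sketch, not part of the original tutorial: show the feature spec\n",
"# and the attributes of one raw test example.\n",
"print(celeb_a_builder.info.features)\n",
"\n",
"sample = next(iter(celeb_a_builder.as_dataset(split='test').take(1)))\n",
"print(\"Image shape:\", sample['image'].shape)  # raw CelebA images are 218x178x3\n",
"print(\"Smiling:\", bool(sample['attributes']['Smiling'].numpy()))\n",
"print(\"Young:\", bool(sample['attributes']['Young'].numpy()))"
]
},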
{
"cell_type": "markdown",
"metadata": {
"id": "u5PDLXZb_uIj"
},
"source": [
"## Caveats\n",
"Before moving forward, there are several considerations to keep in mind in using CelebA:\n",
"* Although in principle this notebook could use any dataset of face images, CelebA was chosen because it contains public domain images of public figures.\n",
"* All of the attribute annotations in CelebA are operationalized as binary categories. For example, the \"Young\" attribute (as determined by the dataset labelers) is denoted as either present or absent in the image.\n",
"* CelebA's categorizations do not reflect real human diversity of attributes.\n",
"* For the purposes of this notebook, the feature containing the \"Young\" attribute is referred to as \"age group\", where the presence of the \"Young\" attribute in an image is labeled as a member of the \"Young\" age group and the absence of the \"Young\" attribute is labeled as a member of the \"Not Young\" age group. These are assumptions made as this information is not mentioned in the [original paper](http://openaccess.thecvf.com/content_iccv_2015/html/Liu_Deep_Learning_Face_ICCV_2015_paper.html).\n",
"* As such, performance in the models trained in this notebook is tied to the ways the attributes have been operationalized and annotated by the authors of CelebA.\n",
"* This model should not be used for commercial purposes as that would violate [CelebA's non-commercial research agreement](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Elkiu92cY2bY"
},
"source": [
"# Defining the Challenge\n",
"\n",
"In this code block will set hyperparameters that highly define the problem we trying to solve. \n",
"The value of `IMAGE_SIZE` determines the width and height of the image we are feeding into the neural network. The smaller this number the faster, but also more imprecise our classification algorithm gets.\n",
"\n",
"`LABEL_KEY` determines the attribute we are training our classifier on (e.g. does a person have a mustache, is smiling, wears a hat or not?),\n",
"while `GROUP_KEY` defines the groups we are evalutaing on (e.g. male, young, chubby). Keep in mind that does are only binary attributes - the abscence of the male attribute probably denotes the female one. \n",
"\n",
"You can find the 40 different attributes in [this table](https://www.researchgate.net/figure/List-of-the-40-face-attributes-provided-with-the-CelebA-database_tbl1_327029519)\n",
"\n",
"__Note:__ after completing this exercise, feel free to play around with those variables"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gDdarTZxk6y4"
},
"outputs": [],
"source": [
"\n",
"IMAGE_SIZE = 64\n",
"LABEL_KEY = \"Smiling\"\n",
"GROUP_KEY = \"Young\""
]
},
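{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below is an optional addition to the original tutorial: it sketches a few alternative settings you could try after completing the exercise. The attribute names must match the CelebA spelling exactly (see the table linked above)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#@title (Optional) Alternative settings to experiment with\n",
"# Uncomment one LABEL_KEY and/or GROUP_KEY after finishing the exercise.\n",
"# The names below are CelebA attributes mentioned in the text above.\n",
"# LABEL_KEY = \"Mustache\"\n",
"# LABEL_KEY = \"Wearing_Hat\"\n",
"# GROUP_KEY = \"Male\"\n",
"# GROUP_KEY = \"Chubby\""
]
},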
{
"source": [
"# Setting Up Input Functions\n",
"The subsequent cells will help streamline the input pipeline as well as visualize performance."
],
"cell_type": "markdown",
"metadata": {}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "SD-H70Je0cTp"
},
"outputs": [],
"source": [
"#@title Define Preprocessing and Dataset Functions { display-mode: \"form\" }\n",
"\n",
"ATTR_KEY = \"attributes\"\n",
"IMAGE_KEY = \"image\"\n",
"\n",
"def preprocess_input_dict(feat_dict):\n",
" # Separate out the image and target variable from the feature dictionary.\n",
" image = feat_dict[IMAGE_KEY]\n",
" label = feat_dict[ATTR_KEY][LABEL_KEY]\n",
" group = feat_dict[ATTR_KEY][GROUP_KEY]\n",
"\n",
" # Resize and normalize image.\n",
" image = tf.cast(image, tf.float32)\n",
" image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])\n",
" image /= 255.0\n",
"\n",
" # Cast label and group to float32.\n",
" label = tf.cast(label, tf.float32)\n",
" group = tf.cast(group, tf.float32)\n",
"\n",
" feat_dict[IMAGE_KEY] = image\n",
" feat_dict[ATTR_KEY][LABEL_KEY] = label\n",
" feat_dict[ATTR_KEY][GROUP_KEY] = group\n",
"\n",
" return feat_dict\n",
"\n",
"get_image_and_label = lambda feat_dict: (feat_dict[IMAGE_KEY], feat_dict[ATTR_KEY][LABEL_KEY])\n",
"get_image_label_and_group = lambda feat_dict: (feat_dict[IMAGE_KEY], feat_dict[ATTR_KEY][LABEL_KEY], feat_dict[ATTR_KEY][GROUP_KEY])\n",
"\n",
"\n",
"# Train data returning either 2 or 3 elements (the third element being the group)\n",
"def celeb_a_train_data_wo_group(batch_size):\n",
" celeb_a_train_data = celeb_a_builder.as_dataset(split='train').shuffle(1024).repeat().batch(batch_size).map(preprocess_input_dict)\n",
" return celeb_a_train_data.map(get_image_and_label)\n",
"def celeb_a_train_data_w_group(batch_size):\n",
" celeb_a_train_data = celeb_a_builder.as_dataset(split='train').shuffle(1024).repeat().batch(batch_size).map(preprocess_input_dict)\n",
" return celeb_a_train_data.map(get_image_label_and_group)\n",
"\n",
"# Test data for the overall evaluation\n",
"celeb_a_test_data = celeb_a_builder.as_dataset(split='test').batch(1).map(preprocess_input_dict).map(get_image_label_and_group)\n",
"# Copy test data locally to be able to read it into tfma\n",
"copy_test_files_to_local()"
]
},
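{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (an optional addition to the original tutorial), we can pull one small batch from the training input function and confirm the shapes and value ranges the model will see."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#@title (Optional) Sanity-check the input pipeline\n",
"# Optional sketch: fetch one batch of 4 preprocessed examples.\n",
"images, labels = next(iter(celeb_a_train_data_wo_group(4)))\n",
"print(\"Images:\", images.shape)  # expected: (4, 64, 64, 3)\n",
"print(\"Pixel range:\", float(tf.reduce_min(images)), \"to\", float(tf.reduce_max(images)))\n",
"print(\"Labels:\", labels.numpy())  # 1.0 = \"Smiling\", 0.0 = not smiling"
]
},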
{
"cell_type": "markdown",
"metadata": {
"id": "NXO3woTxiCk0"
},
"source": [
"# Build a simple CNN model\n",
"\n",
"In this next block of code we define an Artificial Neural Network with several different layers. Those layers include convolutional filers, pooling and fully-conected layers. We may be able to greatly improve model performance by adding some more complexity (e.g., more densely-connected layers, exploring different activation functions, increasing image size, different acrichtectures, regularization methods, ...), but that may distract from the goal of demonstrating how bias manifests itself in ML models. For that reason, the model will be kept simple — but feel encouraged to explore this space."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "RNZhN_zU8DRD"
},
"outputs": [],
"source": [
"from tensorflow.keras import layers\n",
"\n",
"def create_model():\n",
" # For this notebook, accuracy will be used to evaluate performance.\n",
" METRICS = [\n",
" tf.keras.metrics.BinaryAccuracy(name='accuracy')\n",
" ]\n",
"\n",
" model = keras.Sequential([\n",
" layers.InputLayer(input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3), name='image'),\n",
" layers.Conv2D(16, 3, padding='same', activation='relu'),\n",
" layers.MaxPooling2D(),\n",
" layers.Conv2D(32, 3, padding='same', activation='relu'),\n",
" layers.MaxPooling2D(),\n",
" layers.Flatten(),\n",
" layers.Dense(64, activation='relu'),\n",
" layers.Dense(1, activation=None)\n",
" ])\n",
" \n",
" model.compile(\n",
" optimizer=tf.keras.optimizers.Adam(0.001),\n",
" loss='hinge',\n",
" metrics=METRICS)\n",
" return model"
]
},
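{
"cell_type": "markdown",
"metadata": {},
"source": [
"To verify the architecture described above, we can (optionally) build the model once and print its layer summary. Note that the final `Dense(1)` layer has no activation, so the model outputs a raw score rather than a probability."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional addition: inspect the layers and parameter counts of the model.\n",
"create_model().summary()"
]
},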
{
"cell_type": "markdown",
"metadata": {
"id": "7A4uKPNVzPVO"
},
"source": [
"We also define a function to set seeds to ensure reproducible results. Note that this colab is meant as an educational tool and does not have the stability of a finely tuned production pipeline. Running without setting a seed may lead to varied results. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-IVw4EgKzqSF"
},
"outputs": [],
"source": [
"def set_seeds():\n",
" np.random.seed(121212)\n",
" tf.compat.v1.set_random_seed(212121)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Xrbjmmeom8pA"
},
"source": [
"# Fairness Indicators Helper Functions\n",
"Before training our model, we define a number of helper functions that will allow us to evaluate the model's performance via Fairness Indicators. \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1EPF_k620CRN"
},
"source": [
"First, we create a helper function to save our model once we train it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ejHbhLW5epar"
},
"outputs": [],
"source": [
"#@title Save Model function { display-mode: \"form\" }\n",
"def save_model(model, subdir):\n",
" base_dir = tempfile.mkdtemp(prefix='saved_models')\n",
" model_location = os.path.join(base_dir, subdir)\n",
" model.save(model_location, save_format='tf')\n",
" return model_location"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "erhKEvqByCNj"
},
"source": [
"Next, we define functions used to preprocess the data in order to correctly pass it through to TFMA."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "D2qa8Okwj_U3"
},
"outputs": [],
"source": [
"#@title Data Preprocessing functions { display-mode: \"form\" }\n",
"def tfds_filepattern_for_split(dataset_name, split):\n",
" return f\"{local_test_file_full_prefix()}*\"\n",
"\n",
"class PreprocessCelebA(object):\n",
" \"\"\"Class that deserializes, decodes and applies additional preprocessing for CelebA input.\"\"\"\n",
" def __init__(self, dataset_name):\n",
" builder = tfds.builder(dataset_name)\n",
" self.features = builder.info.features\n",
" example_specs = self.features.get_serialized_info()\n",
" self.parser = tfds.core.example_parser.ExampleParser(example_specs)\n",
"\n",
" def __call__(self, serialized_example):\n",
" # Deserialize\n",
" deserialized_example = self.parser.parse_example(serialized_example)\n",
" # Decode\n",
" decoded_example = self.features.decode_example(deserialized_example)\n",
" # Additional preprocessing\n",
" image = decoded_example[IMAGE_KEY]\n",
" label = decoded_example[ATTR_KEY][LABEL_KEY]\n",
" # Resize and scale image.\n",
" image = tf.cast(image, tf.float32)\n",
" image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])\n",
" image /= 255.0\n",
" image = tf.reshape(image, [-1])\n",
" # Cast label and group to float32.\n",
" label = tf.cast(label, tf.float32)\n",
"\n",
" group = decoded_example[ATTR_KEY][GROUP_KEY]\n",
" \n",
" output = tf.train.Example()\n",
" output.features.feature[IMAGE_KEY].float_list.value.extend(image.numpy().tolist())\n",
" output.features.feature[LABEL_KEY].float_list.value.append(label.numpy())\n",
" output.features.feature[GROUP_KEY].bytes_list.value.append(GROUP_KEY if group.numpy() else 'Not ' + GROUP_KEY)\n",
" return output.SerializeToString()\n",
"\n",
"def tfds_as_pcollection(beam_pipeline, dataset_name, split):\n",
" return (\n",
" beam_pipeline\n",
" | 'Read records' >> beam.io.ReadFromTFRecord(tfds_filepattern_for_split(dataset_name, split))\n",
" | 'Preprocess' >> beam.Map(PreprocessCelebA(dataset_name))\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fBKvxd2Tz3hK"
},
"source": [
"Finally, we define a function that evaluates the results in TFMA."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "30YduitftaNB"
},
"outputs": [],
"source": [
"#@title TFMA Evaluation function { display-mode: \"form\" }\n",
"def get_eval_results(model_location, eval_subdir):\n",
" base_dir = tempfile.mkdtemp(prefix='saved_eval_results')\n",
" tfma_eval_result_path = os.path.join(base_dir, eval_subdir)\n",
"\n",
" eval_config_pbtxt = \"\"\"\n",
" model_specs {\n",
" label_key: \"%s\"\n",
" }\n",
" metrics_specs {\n",
" metrics {\n",
" class_name: \"FairnessIndicators\"\n",
" config: '{ \"thresholds\": [0.5] }'\n",
" }\n",
" metrics {\n",
" class_name: \"ExampleCount\"\n",
" }\n",
" }\n",
" slicing_specs {}\n",
" slicing_specs { feature_keys: \"%s\" }\n",
" options {\n",
" compute_confidence_intervals { value: False }\n",
" disabled_outputs{values: \"analysis\"}\n",
" }\n",
" \"\"\" % (LABEL_KEY, GROUP_KEY)\n",
" \n",
" eval_config = text_format.Parse(eval_config_pbtxt, tfma.EvalConfig())\n",
"\n",
" eval_shared_model = tfma.default_eval_shared_model(\n",
" eval_saved_model_path=model_location, tags=[tf.saved_model.SERVING])\n",
"\n",
" schema_pbtxt = \"\"\"\n",
" tensor_representation_group {\n",
" key: \"\"\n",
" value {\n",
" tensor_representation {\n",
" key: \"%s\"\n",
" value {\n",
" dense_tensor {\n",
" column_name: \"%s\"\n",
" shape {\n",
" dim { size: %s }\n",
" dim { size: %s }\n",
" dim { size: 3 }\n",
" }\n",
" }\n",
" }\n",
" }\n",
" }\n",
" }\n",
" feature {\n",
" name: \"%s\"\n",
" type: FLOAT\n",
" }\n",
" feature {\n",
" name: \"%s\"\n",
" type: FLOAT\n",
" }\n",
" feature {\n",
" name: \"%s\"\n",
" type: BYTES\n",
" }\n",
" \"\"\" % (IMAGE_KEY, IMAGE_KEY, IMAGE_SIZE, IMAGE_SIZE, IMAGE_KEY, LABEL_KEY, GROUP_KEY)\n",
" schema = text_format.Parse(schema_pbtxt, schema_pb2.Schema())\n",
" coder = tf_example_record.TFExampleBeamRecord(\n",
" physical_format='inmem', schema=schema,\n",
" raw_record_column_name=tfma.ARROW_INPUT_COLUMN)\n",
" tensor_adapter_config = tensor_adapter.TensorAdapterConfig(\n",
" arrow_schema=coder.ArrowSchema(),\n",
" tensor_representations=coder.TensorRepresentations())\n",
" # Run the fairness evaluation.\n",
" with beam.Pipeline() as pipeline:\n",
" _ = (\n",
" tfds_as_pcollection(pipeline, 'celeb_a', 'test')\n",
" | 'ExamplesToRecordBatch' >> coder.BeamSource()\n",
" | 'ExtractEvaluateAndWriteResults' >>\n",
" tfma.ExtractEvaluateAndWriteResults(\n",
" eval_config=eval_config,\n",
" eval_shared_model=eval_shared_model,\n",
" output_path=tfma_eval_result_path,\n",
" tensor_adapter_config=tensor_adapter_config)\n",
" )\n",
" return tfma.load_eval_result(output_path=tfma_eval_result_path)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "76tZ3vk-tyo9"
},
"source": [
"# Train & Evaluate Model\n",
"\n",
"With the model now defined and the input pipeline in place, we’re now ready to train our model. To cut back on the amount of execution time and memory, we will train the model by slicing the data into small batches with only a few repeated iterations.\n",