Many AAdvantage members ask whether they can use their miles or upgrade benefits to enhance another passenger's travel experience. Whether you are traveling together or gifting an upgrade, American Airlines offers flexible ways to upgrade others, though some limitations apply depending on the type of upgrade used.
Yes, you can upgrade someone else on American Airlines, including family, friends, or business associates, using AAdvantage miles or systemwide upgrades (SWUs). You don't need to be on the same itinerary to process the upgrade, which makes it an easy way to extend your benefits. The rules vary slightly depending on the method used.
If you're using AAdvantage miles, you can log in to your account and redeem miles for another passenger's upgrade. The miles are deducted from your account, not theirs, but their name and flight details must match the reservation. Availability is subject to upgrade space on the flight.
Upgrades with miles can be applied to Main Cabin and Premium Economy tickets moving to the next cabin, usually Business or First Class. Depending on the fare class, you may be required to pay a co-pay along with the miles; co-pay amounts vary by route and fare type.
AAdvantage elite members (Platinum Pro and Executive Platinum) also receive systemwide upgrades (SWUs) each year. These upgrades are highly valuable and transferable: you can use an SWU to upgrade anyone on a qualifying American Airlines-operated flight, space permitting. SWUs do not require co-pays and can apply to long-haul routes.
To apply an SWU to someone else's booking, you must call American Airlines or submit the request through your AAdvantage account. It helps to have the booking reference and flight number ready for smooth processing. If the upgrade is waitlisted, the passenger will be notified when it clears.
Upgrades cannot be transferred to another person's account. For example, you cannot give your miles directly to another member so they can process their own upgrade. You must initiate the upgrade from your own AAdvantage account, though the flight can be booked in their name; it is essentially a gifting process.
If you're traveling on the same reservation as someone else, elite status upgrades may automatically apply to both travelers, depending on fare and availability. For instance, complimentary upgrades are extended to one companion when available. Be sure to link reservations and call customer service to confirm eligibility.
Mileage and SWU upgrades can't be used on Basic Economy tickets or award tickets; the underlying fare must be eligible for upgrades. Always check the booking conditions before purchasing if you intend to upgrade the seat later, because not all discounted tickets are eligible.
You can also purchase an upgrade outright for another person with cash, either during booking or at check-in if upgrade seats are still available. Cash upgrade prices vary widely by route, class, and availability, and are often cheaper at the airport, so inquire early for the best rates.
To conclude: yes, you can upgrade someone else on American Airlines using miles, SWUs, or cash, provided the fare qualifies and you follow the proper process. Call American's AAdvantage service line for help with gifting upgrades or checking availability. It's a great way to make someone's trip special.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a dataset of videos and comments related to the invasion of Ukraine, published on TikTok by a number of users over the year of 2022. It was compiled by Benjamin Steel, Sara Parker and Derek Ruths at the Network Dynamics Lab, McGill University. We created this dataset to facilitate the study of TikTok, and the nature of social interaction on the platform relevant to a major political event.
The dataset has been released here on Zenodo: https://doi.org/10.5281/zenodo.7534952 as well as on Github: https://github.com/networkdynamics/data-and-code/tree/master/ukraine_tiktok
To create the dataset, we identified hashtags and keywords explicitly related to the conflict to collect a core set of videos (or "TikToks"). We then compiled comments associated with these videos. All of the data captured is publicly available information, and contains personally identifiable information. In total we collected approximately 16 thousand videos and 12 million comments, from approximately 6 million users. There are approximately 1.9 comments on average per user captured, and 1.5 videos per user who posted a video. The author personally collected this data using the web scraping PyTok library, developed by the author: https://github.com/networkdynamics/pytok.
Due to scraping duration, this is just a sample of the publicly available discourse concerning the invasion of Ukraine on TikTok. Due to the fuzzy search functionality of TikTok, the dataset contains videos with a range of relatedness to the invasion.
We release here the unique video IDs of the dataset in a CSV format. The data was collected without the specific consent of the content creators, so we have released only the data required to re-create it, to allow users to delete content from TikTok and be removed from the dataset if they wish. Contained in this repository are scripts that will automatically pull the full dataset, which will take the form of JSON files organised into a folder for each video. The JSON files are the entirety of the data returned by the TikTok API. We include a script to parse the JSON files into CSV files with the most commonly used data. We plan to further expand this dataset as collection processes progress and the war continues. We will version the dataset to ensure reproducibility.
To build this dataset from the IDs here:
Go to https://github.com/networkdynamics/pytok and clone the repo locally
Run pip install -e . in the pytok directory
Run pip install pandas tqdm to install these libraries if not already installed
Run get_videos.py to get the video data
Run video_comments.py to get the comment data
Run user_tiktoks.py to get the video history of the users
Run hashtag_tiktoks.py or search_tiktoks.py to get more videos from other hashtags and search terms
Run load_json_to_csv.py to compile the JSON files into two CSV files, comments.csv and videos.csv
If you get an error about the wrong chrome version, use the command line argument get_videos.py --chrome-version YOUR_CHROME_VERSION. Please note pulling data from TikTok takes a while! We recommend leaving the scripts running on a server for a while for them to finish downloading everything. Feel free to play around with the delay constants to either speed up the process or avoid TikTok rate limiting.
Please do not hesitate to make an issue in this repo to get our help with this!
The videos.csv will contain the following columns:
video_id: Unique video ID
createtime: UTC datetime of video creation time in YYYY-MM-DD HH:MM:SS format
author_name: Unique author name
author_id: Unique author ID
desc: The full video description from the author
hashtags: A list of hashtags used in the video description
share_video_id: If the video is sharing another video, this is the video ID of that original video, else empty
share_video_user_id: If the video is sharing another video, this is the user ID of the author of that video, else empty
share_video_user_name: If the video is sharing another video, this is the user name of the author of that video, else empty
share_type: If the video is sharing another video, this is the type of the share, stitch, duet etc.
mentions: A list of users mentioned in the video description, if any
The comments.csv will contain the following columns:
comment_id: Unique comment ID
createtime: UTC datetime of comment creation time in YYYY-MM-DD HH:MM:SS format
author_name: Unique author name
author_id: Unique author ID
text: Text of the comment
mentions: A list of users that are tagged in the comment
video_id: The ID of the video the comment is on
comment_language: The language of the comment, as predicted by the TikTok API
reply_comment_id: If the comment is replying to another comment, this is the ID of that comment
The data can be compiled into a user interaction network to facilitate study of interaction dynamics. There is code to help with that here: https://github.com/networkdynamics/polar-seeds. Additional scripts for further preprocessing of this data can be found there too.
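As a rough illustration of that step, the reply threads in comments.csv can be loaded with pandas and turned into a directed user interaction graph with networkx. This is a minimal sketch assuming the column names listed above; it is not the polar-seeds code itself:

import pandas as pd
import networkx as nx

comments = pd.read_csv('comments.csv')

# Join each reply to its parent comment to recover (replier, parent author) pairs.
replies = comments.merge(
    comments[['comment_id', 'author_id']],
    left_on='reply_comment_id',
    right_on='comment_id',
    suffixes=('', '_parent'),
)

# Directed edge from the replying user to the user they replied to.
graph = nx.from_pandas_edgelist(
    replies,
    source='author_id',
    target='author_id_parent',
    create_using=nx.DiGraph,
)
print(graph.number_of_nodes(), 'users,', graph.number_of_edges(), 'reply edges')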
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This dataset is derived from the ISIC Archive with the following changes:
If the "benign_malignant" column is null and the "diagnosis" column is "vascular lesion", the target is set to null.
DISCLAIMER: I'm not a dermatologist and I'm not affiliated with ISIC in any way. I don't know whether my approach to setting the target value is acceptable for the ISIC competition. Use at your own risk.
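In pandas terms, the rule above could look like the following sketch. The file name train-metadata.csv is taken from the HDF5 script further down, and it is assumed that the metadata has benign_malignant, diagnosis and target columns:

import pandas as pd

df = pd.read_csv('train-metadata.csv')

# Null out the target where benign_malignant is missing and the
# diagnosis is "vascular lesion", per the rule described above.
mask = df['benign_malignant'].isna() & (df['diagnosis'] == 'vascular lesion')
df.loc[mask, 'target'] = pd.NA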
import os
import multiprocessing as mp
from PIL import Image, ImageOps
import glob
from functools import partial


def list_jpg_files(folder_path):
    # Ensure the folder path ends with a slash
    if not folder_path.endswith('/'):
        folder_path += '/'
    # Use glob to find all .jpg files in the specified folder (non-recursive)
    jpg_files = glob.glob(folder_path + '*.jpg')
    return jpg_files


def resize_image(image_path, destination_folder):
    # Open the image file
    with Image.open(image_path) as img:
        # Get the original dimensions
        original_width, original_height = img.size
        # Calculate the aspect ratio
        aspect_ratio = original_width / original_height
        # Determine the new dimensions based on the aspect ratio
        if aspect_ratio > 1:
            # Width is larger, so we will crop the width
            new_width = int(256 * aspect_ratio)
            new_height = 256
        else:
            # Height is larger, so we will crop the height
            new_width = 256
            new_height = int(256 / aspect_ratio)
        # Resize the image while maintaining the aspect ratio
        img = img.resize((new_width, new_height))
        # Calculate the crop box to center the image
        left = (new_width - 256) / 2
        top = (new_height - 256) / 2
        right = (new_width + 256) / 2
        bottom = (new_height + 256) / 2
        # Crop the image if it results in shrinking
        if new_width > 256 or new_height > 256:
            img = img.crop((left, top, right, bottom))
        else:
            # Add black edges if it results in scaling up
            img = ImageOps.expand(img, border=(int(left), int(top), int(left), int(top)), fill='black')
        # Resize the image to the final dimensions
        img = img.resize((256, 256))
        img.save(os.path.join(destination_folder, os.path.basename(image_path)))


source_folder = ""
destination_folder = ""
images = list_jpg_files(source_folder)
with mp.Pool(processes=12) as pool:
    images = pool.map(partial(resize_image, destination_folder=destination_folder), images)
print("All images resized")
This code will shrink (down-sample) the image if it is larger than 256x256. But if the image is smaller than 256x256, it will add either vertical or horizontal black edges after scaling up the image. In both scenarios, it will keep the center of the input image in the center of the output image.
The HDF5 file is created using the following code:
import os
import pandas as pd
from PIL import Image
import h5py
import io
import numpy as np

# File paths
base_folder = "./isic-2018-task-3-256x256"
csv_file_path = 'train-metadata.csv'
image_folder_path = 'train-image/image'
hdf5_file_path = 'train-image.hdf5'

# Read the CSV file
df = pd.read_csv(os.path.join(base_folder, csv_file_path))

# Open an HDF5 file
with h5py.File(os.path.join(base_folder, hdf5_file_path), 'w') as hdf5_file:
    for index, row in df.iterrows():
        isic_id = row['isic_id']
        image_file_path = os.path.join(base_folder, image_folder_path, f'{isic_id}.jpg')
        if os.path.exists(image_file_path):
            # Open the image file
            with Image.open(image_file_path) as img:
                # Convert the image to a byte buffer
                img_byte_arr = io.BytesIO()
                img.save(img_byte_arr, format=img.format)
                img_byte_arr = img_byte_arr.getvalue()
                hdf5_file.create_dataset(isic_id, data=np.void(img_byte_arr))
        else:
            print(f"Image file for {isic_id} not found.")

print("HDF5 file created successfully.")
To read the HDF5 file, use the following code:
import h5py
from PIL import Image
...
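A minimal completion of that snippet might look as follows; the ID ISIC_0000000 is only a placeholder, and the bytes are recovered from the np.void scalar written by the creation script above:

import io
import h5py
from PIL import Image

with h5py.File('train-image.hdf5', 'r') as hdf5_file:
    # Each dataset holds the raw JPEG bytes of one image as an np.void scalar.
    raw = hdf5_file['ISIC_0000000'][()]
    img = Image.open(io.BytesIO(raw.tobytes()))
    img.show()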
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘DSS Township Counts - by Race - CY 2019’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/6f7f14a5-1509-43cc-ba40-101b3e207860 on 26 January 2022.
--- Dataset description provided by original source is as follows ---
DSS CY 2019 Town counts - Number of people enrolled in DSS services in the calendar year 2019, by township and race.
NOTE: On April 22, 2019 the methodology for determining HUSKY A Newborn recipients changed, which caused an increase of recipients for that benefit starting in October 2016. We now count recipients recorded in the ImpaCT system as well as in the HIX system for that assistance type, instead of using HIX exclusively. Also, the methodology for determining the address of the recipients changed:
1. The address of a recipient in the ImpaCT system is now correctly determined specific to that month instead of using the address of the most recent month. This resulted in some shuffling of the recipients among townships starting in October 2016.
2. If, in a given month, a recipient has benefit records in both the HIX system and in the ImpaCT system, the address of the recipient is now calculated as follows to resolve conflicts: use the residential address in ImpaCT if it exists, else use the mailing address in ImpaCT if it exists, else use the address in HIX. This resulted in a reduction in counts for most townships starting in March 2017 because a single address is now used instead of two when the systems do not agree.
NOTE: On February 14, 2019, the enrollment counts for 2012-2015 across all programs were updated to account for an error in the data integration process. As a result, the count of the number of people served increased by 13% for 2012, 10% for 2013, 8% for 2014 and 4% for 2015. Counts for 2016, 2017 and 2018 remain unchanged.
NOTE: On 1/16/2019 these counts were revised to count a recipient in all locations that recipient resided in that year.
NOTE: On 1/1/2019 the counts were revised to count a recipient in only one town per year even when the recipient moved within the year. The most recent address is used. (But this was reversed later, see above.)
--- Original source retains full ownership of the source dataset ---
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a dataset of videos and comments related to the invasion of Ukraine, published on TikTok by a number of users over the year of 2022. It was compiled by Benjamin Steel, Sara Parker and Derek Ruths at the Network Dynamics Lab, McGill University. We created this dataset to facilitate the study of TikTok, and the nature of social interaction on the platform relevant to a major political event.
The dataset has been released here on Zenodo: https://doi.org/10.5281/zenodo.7926959 as well as on Github: https://github.com/networkdynamics/data-and-code/tree/master/ukraine_tiktok
To create the dataset, we identified hashtags and keywords explicitly related to the conflict to collect a core set of videos (or "TikToks"). We then compiled comments associated with these videos. All of the data captured is publicly available information, and contains personally identifiable information. In total we collected approximately 16 thousand videos and 12 million comments, from approximately 6 million users. There are approximately 1.9 comments on average per user captured, and 1.5 videos per user who posted a video. The author personally collected this data using the web scraping PyTok library, developed by the author: https://github.com/networkdynamics/pytok.
Due to scraping duration, this is just a sample of the publicly available discourse concerning the invasion of Ukraine on TikTok. Due to the fuzzy search functionality of TikTok, the dataset contains videos with a range of relatedness to the invasion.
We release here the unique video IDs of the dataset in a CSV format. The data was collected without the specific consent of the content creators, so we have released only the data required to re-create it, to allow users to delete content from TikTok and be removed from the dataset if they wish. Contained in this repository are scripts that will automatically pull the full dataset, which will take the form of JSON files organised into a folder for each video. The JSON files are the entirety of the data returned by the TikTok API. We include a script to parse the JSON files into CSV files with the most commonly used data. We plan to further expand this dataset as collection processes progress and the war continues. We will version the dataset to ensure reproducibility.
To build this dataset from the IDs here:
Go to https://github.com/networkdynamics/pytok and clone the repo locally
Run pip install -e . in the pytok directory
Run pip install pandas tqdm to install these libraries if not already installed
Run get_videos.py to get the video data
Run video_comments.py to get the comment data
Run user_tiktoks.py to get the video history of the users
Run hashtag_tiktoks.py or search_tiktoks.py to get more videos from other hashtags and search terms
Run load_json_to_csv.py to compile the JSON files into two CSV files, comments.csv and videos.csv
If you get an error about the wrong chrome version, use the command line argument get_videos.py --chrome-version YOUR_CHROME_VERSION
Please note pulling data from TikTok takes a while! We recommend leaving the scripts running on a server for a while for them to finish downloading everything. Feel free to play around with the delay constants to either speed up the process or avoid TikTok rate limiting.
Please do not hesitate to make an issue in this repo to get our help with this!
The videos.csv will contain the following columns:
video_id: Unique video ID
createtime: UTC datetime of video creation time in YYYY-MM-DD HH:MM:SS format
author_name: Unique author name
author_id: Unique author ID
desc: The full video description from the author
hashtags: A list of hashtags used in the video description
share_video_id: If the video is sharing another video, this is the video ID of that original video, else empty
share_video_user_id: If the video is sharing another video, this is the user ID of the author of that video, else empty
share_video_user_name: If the video is sharing another video, this is the user name of the author of that video, else empty
share_type: If the video is sharing another video, this is the type of the share, stitch, duet etc.
mentions: A list of users mentioned in the video description, if any
The comments.csv will contain the following columns:
comment_id: Unique comment ID
createtime: UTC datetime of comment creation time in YYYY-MM-DD HH:MM:SS format
author_name: Unique author name
author_id: Unique author ID
text: Text of the comment
mentions: A list of users that are tagged in the comment
video_id: The ID of the video the comment is on
comment_language: The language of the comment, as predicted by the TikTok API
reply_comment_id: If the comment is replying to another comment, this is the ID of that comment
The data can be compiled into a user interaction network to facilitate study of interaction dynamics. There is code to help with that here: https://github.com/networkdynamics/polar-seeds. Additional scripts for further preprocessing of this data can be found there too.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The dataset is a subset of the TBCOV dataset collected at QCRI filtered for mentions of personally related COVID-19 deaths. The filtering was done using regular expressions such as my * passed, my * died, my * succumbed & lost * battle. A sample of the dataset was annotated on Appen. Please see 'annotation-instructions.txt' for the full instructions provided to the annotators.
The "classifier_filtered_english.csv" file contains 33k deduplicated and classifier-filtered tweets (following X's content redistribution policy). for the 6 countries (Australia, Canada, India, Italy, United Kingdom, and United States) from March 2020 to March 2021 with classifier-labeled death labels, regular expression-filtered gender and relationship labels, and the user device label. The full 57k regex-filtered collection of tweets can be made available on special cases for Academics and Researchers.
date: the date of the tweet
country_name: the country name from Nominatim API
tweet_id: the ID of the tweet
url: the full URL of the tweet
full_text: the full-text content of the tweet (also includes the URL of any media attached)
does_the_tweet_refer_to_the_covidrelated_death_of_one_or_more_individuals_personally_known_to_the_tweets_author: the classifier predicted label for the death (also includes the original labels for the annotated samples)
what_is_the_relationship_between_the_tweets_author_and_the_victim_mentioned: the annotated relationship labels
relative_to_the_time_of_the_tweet_when_did_the_mentioned_death_occur: the annotated relative time labels
user_is_verified: if the user is verified or not
user_gender: the gender of the Twitter user (from the user profile)
user_device: the Twitter client the user uses
has_media: if the tweet has any attached media
has_url: if the tweet text contains a URL
matched_device: the device (Apple or Android) based on the Twitter client
regex_gender: the gender inferred from regular expression-based filtering
regex_relationship: the relationship label from regular expression-based filtering
We first determine the mapping between different relationship labels mentioned in the tweet to the gender. We do not use any relationship like "cousin" from which we cannot easily infer the gender.
Male relationships: 'father', 'dad', 'daddy', 'papa', 'pop', 'pa', 'son', 'brother', 'uncle', 'nephew', 'grandfather', 'grandpa', 'gramps', 'husband', 'boyfriend', 'fiancé', 'groom', 'partner', 'beau', 'friend', 'buddy', 'pal', 'mate', 'companion', 'boy', 'gentleman', 'man', 'father-in-law', 'brother-in-law', 'stepfather', 'stepbrother'
Female relationships: 'mother', 'mom', 'mama', 'mum', 'ma', 'daughter', 'sister', 'aunt', 'niece', 'grandmother', 'grandma', 'granny', 'wife', 'girlfriend', 'fiancée', 'bride', 'partner', 'girl', 'lady', 'woman', 'miss', 'mother-in-law', 'sister-in-law', 'stepmother', 'stepsister'
Based on these mappings, we used the following regex for each gender label to determine the gender of the deceased mentioned in the tweet.
"[m|M]y\s(" + "|".join([r + "s?" for r in relationships]) + ")\s(died|succumbed|deceased)"
First, we get the relationship labels using regex filtering, and then we group them into different age-group categories as shown in the following table. The UK and the US use different age groups because of the different age group definitions in the official data.
| Category     | Relationship (from tweets)    | Age Group (UK) | Age Group (US) |
|--------------|-------------------------------|----------------|----------------|
| Grandparents | grandfather, grandmother      | 65+            | 65+            |
| Parents      | father, mother, uncle, aunt   | 45-64          | 35-64          |
| Siblings     | brother, sister, cousin       | 15-44          | 15-34          |
| Children     | son, daughter, nephew, niece  | 0-14           | 0-14           |
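Expressed as a lookup, the grouping in the table might be written as follows; the dictionary contents are read off the table above, and everything else is an assumption:

# Map regex-extracted relationship labels to the categories in the table.
relationship_to_category = {
    'grandfather': 'Grandparents', 'grandmother': 'Grandparents',
    'father': 'Parents', 'mother': 'Parents', 'uncle': 'Parents', 'aunt': 'Parents',
    'brother': 'Siblings', 'sister': 'Siblings', 'cousin': 'Siblings',
    'son': 'Children', 'daughter': 'Children', 'nephew': 'Children', 'niece': 'Children',
}

# (UK age group, US age group) per category, following the table.
category_to_age_group = {
    'Grandparents': ('65+', '65+'),
    'Parents': ('45-64', '35-64'),
    'Siblings': ('15-44', '15-34'),
    'Children': ('0-14', '0-14'),
}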
The 'english-training.csv' file contains about 13k deduplicated human-annotated tweets. We use a random seed (42) to create the train/test split. The model Covid-Bert-V2 was fine-tuned on the training set for 2 epochs with the following hyperparameters (obtained using 10-fold CV): random_seed: 42, batch_size: 32, dropout: 0.1. We obtained an F1-score of 0.81 on the test set. We used about 5% (671) of the combined and deduplicated annotated tweets as the test set, about 2% (255) as the validation set, and the remaining 12,494 tweets were used for fine-tuning the model. The tweets were preprocessed to replace mentions, URLs, emojis, etc. with generic keywords. The model was trained on a system with a single Nvidia A4000 16GB GPU. The fine-tuned model is also available as the 'model.bin' file. The code for fine-tuning the model as well as for reproducing the experiments is available in this GitHub repository.
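The split sizes and seed described above could be reproduced along these lines, assuming a scikit-learn-style split; the authors' exact code lives in their GitHub repository:

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv('english-training.csv')  # ~13k deduplicated annotated tweets

# 671 test tweets and 255 validation tweets, seed 42, per the description.
train_val, test = train_test_split(df, test_size=671, random_state=42)
train, val = train_test_split(train_val, test_size=255, random_state=42)
print(len(train), len(val), len(test))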
We also include a datasheet for the dataset following the recommendation of "Datasheets for Datasets" (Gebru et. al.) which provides more information about how the dataset was created and how it can be used. Please see "Datasheet.pdf".
NOTE: We recommend that researchers try to rehydrate the individual tweets to ensure that the user has not deleted the tweet since posting. This gives users a mechanism to opt out of having their data analyzed.
Please only use your institutional email when requesting the dataset as anything else (like gmail.com) will be rejected. The dataset will only be made available on reasonable request for Academics and Researchers. Please mention why you need the dataset and how you plan to use the dataset when making a request.
Travelers call American Airlines to change their itineraries for many reasons, and most changes are made within the first 48 hours. If you need to alter your flight date, time, or route, or upgrade your seat, calling is often the most effective approach. Knowing exactly how and when to call can make your experience easier and more cost-effective.
To call American Airlines to make a change, the most direct number is 800-433-7300, which connects you to the general reservations line. Once connected, follow the prompts to be routed to the flight change or modification department, where agents can assist with most types of changes, whether your ticket is refundable, non-refundable, or booked with miles.
Timing matters when calling to change a flight. If you are calling within 24 hours of booking, most tickets can be changed or cancelled with no fees, even basic economy. After that window, change fees and fare differences may apply depending on the ticket class.
If you purchased a flexible fare or added trip protection or travel insurance, changing your flight becomes even easier. You'll often avoid change fees and may even be able to switch destinations. Confirm these terms during your call with a customer service agent for accurate pricing.
If your flight was disrupted by American Airlines itself, such as delays, cancellations, or schedule changes, you can often change flights for free. Call immediately to ensure you're rebooked on a flight that meets your needs, especially during weather disruptions or airline strikes. Agents can access inventory not visible on the website.
You may also call to make changes related to seating, including upgrades, aisle or window requests, or paid seat assignments. While these can often be done online, calling is better for elite passengers looking to use miles or complimentary upgrades. The phone agent can also tell you if premium seats become available at the last minute.
American Airlines allows most tickets to be changed over the phone, even if they were booked through a travel agent. However, if your booking was made through a third-party platform (like Expedia or Priceline), you may be redirected to them, so check the source of your booking before calling.
You'll need to provide your six-character confirmation code (known as a PNR) and your full name. If someone else booked the flight for you, have their name and booking information ready. The clearer your information, the quicker your change will be processed.
Depending on your fare type, you may have to pay a fare difference. For example, if your original ticket was $300 and the new one is $350, you'll pay the $50 difference. The representative will walk you through any extra fees or options like upgrading to Main Cabin Extra or First Class.
If you're traveling internationally or need to change complex routes (e.g., multiple legs, connections, or stopovers), calling is even more important. Online tools may not support these modifications, but a phone agent can reroute you to a different city or date. Always ask about alternative airports if your schedule is flexible.
Elite AAdvantage members can use their dedicated phone lines for priority service. This is particularly useful when changing a flight at the last minute or during peak travel times like Thanksgiving or spring break. The dedicated line connects you to experienced representatives who specialize in complex changes.
Before you hang up, confirm that your change has been processed and ask for an updated email confirmation. This prevents future issues and gives you a paper trail if you need to dispute charges later. You can also request a text message summary of the change.
In summary, calling American Airlines to change a flight is simple when done correctly. Use the official number, have your documents ready, and act promptly to avoid unnecessary fees or missed travel opportunities. Calling remains the fastest way to manage flight changes securely and accurately.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘DSS Township Counts - by Race - CY 2020’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/c079102f-6400-4cb6-8460-6230ca51ee72 on 26 January 2022.
--- Dataset description provided by original source is as follows ---
In order to facilitate public review and access, enrollment data published on the Open Data Portal is provided as promptly as possible after the end of each month or year, as applicable to the data set. Due to eligibility policies and operational processes, enrollment can vary slightly after publication. Please be aware of the point-in-time nature of the published data when comparing to other data published or shared by the Department of Social Services, as this data may vary slightly.
As a general practice, for monthly data sets published on the Open Data Portal, DSS will continue to refresh the monthly enrollment data for three months, after which time it will remain static. For example, when March data is published the data in January and February will be refreshed. When April data is published, February and March data will be refreshed, but January will not change. This allows the Department to account for the most common enrollment variations in published data while also ensuring that data remains as stable as possible over time. In the event of a significant change in enrollment data, the Department may republish reports and will notate such republication dates and reasons accordingly.
In March 2020, Connecticut opted to add a new Medicaid coverage group: the COVID-19 Testing Coverage for the Uninsured. Enrollment data on this limited-benefit Medicaid coverage group is being incorporated into Medicaid data effective January 1, 2021. Enrollment data for this coverage group prior to January 1, 2021, was listed under State Funded Medical. A historical accounting of enrollment of the specific coverage group starting in calendar year 2020 will also be published separately.
DSS CY 2020 Town counts - Number of people enrolled in DSS services in the calendar year 2020, by township and race.
NOTE: On April 22, 2019 the methodology for determining HUSKY A Newborn recipients changed, which caused an increase of recipients for that benefit starting in October 2016. We now count recipients recorded in the ImpaCT system as well as in the HIX system for that assistance type, instead of using HIX exclusively. Also, the methodology for determining the address of the recipients changed:
1. The address of a recipient in the ImpaCT system is now correctly determined specific to that month instead of using the address of the most recent month. This resulted in some shuffling of the recipients among townships starting in October 2016.
2. If, in a given month, a recipient has benefit records in both the HIX system and in the ImpaCT system, the address of the recipient is now calculated as follows to resolve conflicts: use the residential address in ImpaCT if it exists, else use the mailing address in ImpaCT if it exists, else use the address in HIX. This resulted in a reduction in counts for most townships starting in March 2017 because a single address is now used instead of two when the systems do not agree.
NOTE: On February 14, 2019, the enrollment counts for 2012-2015 across all programs were updated to account for an error in the data integration process. As a result, the count of the number of people served increased by 13% for 2012, 10% for 2013, 8% for 2014 and 4% for 2015. Counts for 2016, 2017 and 2018 remain unchanged.
NOTE: On 1/16/2019 these counts were revised to count a recipient in all locations that recipient resided in that year.
NOTE: On 1/1/2019 the counts were revised to count a recipient in only one town per year even when the recipient moved within the year. The most recent address is used. (But this was reversed later, see above.)
--- Original source retains full ownership of the source dataset ---
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘DSS Township Counts - by Program - CY 2020’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/d177ebc6-8253-4fb6-a64a-159fa213fec9 on 26 January 2022.
--- Dataset description provided by original source is as follows ---
In order to facilitate public review and access, enrollment data published on the Open Data Portal is provided as promptly as possible after the end of each month or year, as applicable to the data set. Due to eligibility policies and operational processes, enrollment can vary slightly after publication. Please be aware of the point-in-time nature of the published data when comparing to other data published or shared by the Department of Social Services, as this data may vary slightly.
As a general practice, for monthly data sets published on the Open Data Portal, DSS will continue to refresh the monthly enrollment data for three months, after which time it will remain static. For example, when March data is published the data in January and February will be refreshed. When April data is published, February and March data will be refreshed, but January will not change. This allows the Department to account for the most common enrollment variations in published data while also ensuring that data remains as stable as possible over time. In the event of a significant change in enrollment data, the Department may republish reports and will notate such republication dates and reasons accordingly.
In March 2020, Connecticut opted to add a new Medicaid coverage group: the COVID-19 Testing Coverage for the Uninsured. Enrollment data on this limited-benefit Medicaid coverage group is being incorporated into Medicaid data effective January 1, 2021. Enrollment data for this coverage group prior to January 1, 2021, was listed under State Funded Medical. A historical accounting of enrollment of the specific coverage group starting in calendar year 2020 will also be published separately.
DSS CY 2020 Town counts - Number of people enrolled in DSS services in the calendar year 2020, by township and program.
NOTE: On April 22, 2019 the methodology for determining HUSKY A Newborn recipients changed, which caused an increase of recipients for that benefit starting in October 2016. We now count recipients recorded in the ImpaCT system as well as in the HIX system for that assistance type, instead of using HIX exclusively. Also, the methodology for determining the address of the recipients changed:
1. The address of a recipient in the ImpaCT system is now correctly determined specific to that month instead of using the address of the most recent month. This resulted in some shuffling of the recipients among townships starting in October 2016.
2. If, in a given month, a recipient has benefit records in both the HIX system and in the ImpaCT system, the address of the recipient is now calculated as follows to resolve conflicts: use the residential address in ImpaCT if it exists, else use the mailing address in ImpaCT if it exists, else use the address in HIX. This resulted in a reduction in counts for most townships starting in March 2017 because a single address is now used instead of two when the systems do not agree.
NOTE: On February 14, 2019, the enrollment counts for 2012-2015 across all programs were updated to account for an error in the data integration process. As a result, the count of the number of people served increased by 13% for 2012, 10% for 2013, 8% for 2014 and 4% for 2015. Counts for 2016, 2017 and 2018 remain unchanged.
NOTE: On 1/16/2019 these counts were revised to count a recipient in all locations that recipient resided in that year.
NOTE: On 1/1/2019 the counts were revised to count a recipient in only one town per year even when the recipient moved within the year. The most recent address is used. (But this was reversed later, see above.)
--- Original source retains full ownership of the source dataset ---
Apache License 2.0: http://www.apache.org/licenses/LICENSE-2.0
This dataset contains 56,964 conversations between an AI assistant and a (fake) "Human" (generated) in Dutch, specifically in the domain of programming (Stack Overflow). They are translations of Baize's machine-generated answers to the Stack Overflow dataset.
Data Instances
{'id': 9,
 'input': 'Het gesprek tussen de mens en de AI-assistent.\n'
          '[|Human|] Ik heb wat hulp nodig met Spark SQL union en het ordenen van kolommen.\n'
          '[|AI|] Natuurlijk, ik help je graag. Waar heb je specifiek hulp bij nodig?\n'
          '[|Human|] Ik wil de output van een UNION-query ordenen op een kolom die niet aanwezig is in de SELECT-instructie. Is dat mogelijk?\n'
          '[|AI|] Ja, het is mogelijk met Spark SQL. Je kunt de ordinale positie van de kolom in de ORDER BY-clausule gebruiken in plaats van de kolomnaam. Als je bijvoorbeeld wilt ordenen op de derde kolom in de SELECT-instructie, kun je ORDER BY 3 gebruiken.\n'
          '[|Human|] Dat is nuttig. Kan ik ook een alias gebruiken in de ORDER BY-clausule?\n'
          '[|AI|] Ja, je kunt een alias gebruiken in de ORDER BY-clausule. De alias moet worden gedefinieerd in de SELECT-instructie. Als je bijvoorbeeld de derde kolom als "col3" alias geeft, kun je ordenen met ORDER BY col3.\n'
          '[|Human|] Oké, ik denk dat ik het begrijp. Kan ik dezelfde alias gebruiken voor kolommen uit beide tabellen in de UNION?\n'
          '[|AI|] Nee, je kunt dezelfde alias niet gebruiken voor kolommen uit beide tabellen in de UNION. Elke tabel in de UNION moet een unieke alias hebben voor de kolommen.\n'
          '[|Human|] ',
 'topic': 'Spark SQL UNION - ORDER BY kolom niet in SELECT'}
Data Fields
id: the ID of the item. The following 82 IDs are not included because they could not be translated: [1713, 1937, 1960, 4326, 4356, 8357, 8542, 8827, 9137, 9782, 11560, 11961, 12244, 12362, 12488, 13259, 13621, 14445, 14835, 15006, 17746, 18808, 19285, 19426, 19491, 21270, 21661, 22098, 23352, 23840, 23869, 25148, 25928, 27102, 27856, 28387, 29942, 30041, 30251, 32396, 32742, 32941, 33628, 34116, 34648, 34859, 35977, 35987, 36035, 36456, 37028, 37238, 37640, 38107, 38735, 39015, 40984, 41115, 41567, 42397, 43219, 43783, 44599, 44980, 45239, 47676, 48922, 49534, 50282, 50683, 50804, 50919, 51076, 51211, 52000, 52183, 52489, 52595, 53884, 54726, 55795, 56992]
input: the machine-generated conversation. Every conversation starts with "Het gesprek tussen de mens en de AI-assistent." and has at least one occurrence of both [|AI|] and [|Human|]
topic: the machine-generated topic of the conversation
Dataset Creation
Both the conversations and the topics were translated with OpenAI's API for gpt-3.5-turbo, using max_tokens=1024 and temperature=0 as parameters.
The prompt template to translate the input is (where src_lang was English and tgt_lang Dutch):
CONVERSATION_TRANSLATION_PROMPT = """You are asked to translate a conversation between an AI assistant and a human from {src_lang} into {tgt_lang}.
Here are the requirements that you should adhere to:
1. maintain the format: the conversation consists of the AI (marked as `[|AI|]`) and the human (`[|Human|]`) talking in turns and responding to each other;
2. do not translate the speaker identifiers `[|AI|]` and `[|Human|]` but always copy them into the translation in appropriate places;
3. ensure accurate translation and keep the correctness of the conversation;
4. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
5. translate the human's text using informal, but standard, language;
6. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
7. if the human asks to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in {tgt_lang}, and then also generate a corrected output version for the AI in {tgt_lang};
8. if the human asks to translate text from one to another language, then you only translate the human's question to {tgt_lang} but you keep the translation that the AI provides in the language that the human requested;
9. do not translate code fragments but copy them as they are. If there are English examples, variable names or definitions in code fragments, keep them in English.
Now translate the following conversation with the requirements set out above. Do not provide an explanation and do not add anything else.
"""
The prompt to translate the topic is:
TOPIC_TRANSLATION_PROMPT = "Translate the following title of a conversation from {src_lang} to {tgt_lang} in a succinct," \
" summarizing manner. Translate accurately and formally. Do not provide any explanation" \
" about the translation and do not include the original title.
"
The system message was:
You are a helpful assistant that translates English to Dutch to the requirements that are given to you.
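For reference, the translation call described above might have looked roughly like this with the legacy (pre-1.0) openai Python client; how the conversation text was appended to the prompt is an assumption:

import openai

SYSTEM_MESSAGE = (
    "You are a helpful assistant that translates English to Dutch "
    "to the requirements that are given to you."
)

def translate_conversation(conversation: str) -> str:
    # CONVERSATION_TRANSLATION_PROMPT is the template shown above.
    prompt = CONVERSATION_TRANSLATION_PROMPT.format(src_lang="English", tgt_lang="Dutch")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        max_tokens=1024,
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": prompt + "\n" + conversation},
        ],
    )
    return response["choices"][0]["message"]["content"]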
Note that 82 items (0.1%) were not successfully translated: the translation was missing the AI identifier [|AI|] and/or the human one [|Human|]. The IDs for the missing items are [1713, 1937, 1960, 4326, 4356, 8357, 8542, 8827, 9137, 9782, 11560, 11961, 12244, 12362, 12488, 13259, 13621, 14445, 14835, 15006, 17746, 18808, 19285, 19426, 19491, 21270, 21661, 22098, 23352, 23840, 23869, 25148, 25928, 27102, 27856, 28387, 29942, 30041, 30251, 32396, 32742, 32941, 33628, 34116, 34648, 34859, 35977, 35987, 36035, 36456, 37028, 37238, 37640, 38107, 38735, 39015, 40984, 41115, 41567, 42397, 43219, 43783, 44599, 44980, 45239, 47676, 48922, 49534, 50282, 50683, 50804, 50919, 51076, 51211, 52000, 52183, 52489, 52595, 53884, 54726, 55795, 56992].
The translation quality has not been verified. Use at your own risk!
Licensing Information
Licensing info for Stack Overflow Questions is listed as Apache 2.0. If you use the current dataset, you should also adhere to the original license.
This text was generated (either in part or in full) with GPT-3 (gpt-3.5-turbo), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.
If you use this dataset, you must also follow the Sharing and Usage policies.
As clearly stated in their Terms of Use, specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. As far as I am aware, that is a specific restriction that should serve as an addendum to the current license.
This dataset is also available on the Hugging Face hub with the same DOI and license. See that README for more info.
Attribution-ShareAlike 3.0 (CC BY-SA 3.0): https://creativecommons.org/licenses/by-sa/3.0/
License information was derived automatically
This dataset contains 14,934 instructions, contexts and responses, in several natural language categories such as classification, closed QA, generation, etc. The English original dataset was created by @databricks, who crowd-sourced the data creation via its employees. The current dataset is a translation of that dataset through ChatGPT (gpt-3.5-turbo).
Data Instances
{ "id": 14963, "instruction": "Wat zijn de duurste steden ter wereld?", "context": "", "response": "Dit is een uitgebreide lijst van de duurste steden: Singapore, Tel Aviv, New York, Hong Kong, Los Angeles, Zurich, Genève, San Francisco, Parijs en Sydney.", "category": "brainstorming" }
Data Fields
id: the ID of the item. The following 77 IDs are not included because they could not be translated (or were too long): [1502, 1812, 1868, 4179, 4541, 6347, 8851, 9321, 10588, 10835, 11257, 12082, 12319, 12471, 12701, 12988, 13066, 13074, 13076, 13181, 13253, 13279, 13313, 13346, 13369, 13446, 13475, 13528, 13546, 13548, 13549, 13558, 13566, 13600, 13603, 13657, 13668, 13733, 13765, 13775, 13801, 13831, 13906, 13922, 13923, 13957, 13967, 13976, 14028, 14031, 14045, 14050, 14082, 14083, 14089, 14110, 14155, 14162, 14181, 14187, 14200, 14221, 14222, 14281, 14473, 14475, 14476, 14587, 14590, 14667, 14685, 14764, 14780, 14808, 14836, 14891, 14966]
instruction: the instruction (question)
context: additional context that the AI can use to answer the question
response: the AI's expected response
category: the category of this type of question (see Dolly for more info)
Dataset Creation
Both the translations and the topics were generated with OpenAI's API for gpt-3.5-turbo, using max_tokens=1024 and temperature=0 as parameters.
The prompt template to translate the input is (where src_lang was English and tgt_lang Dutch):
CONVERSATION_TRANSLATION_PROMPT = """You are asked to translate a task's instruction, optional context to the task, and the response to the task, from {src_lang} to {tgt_lang}.
Here are the requirements that you should adhere to:
1. maintain the format: the task consists of a task instruction (marked instruction:), optional context to the task (marked context:) and response for the task (marked response:);
2. do not translate the identifiers instruction:, context:, and response: but instead copy them to your output;
3. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
4. translate the instruction and context text using informal, but standard, language;
5. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
6. if the instruction is to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in the context in {tgt_lang}, and then also generate a corrected output version in the output in {tgt_lang};
7. if the instruction is to translate text from one language to another, then you do not translate the text that needs to be translated in the instruction or the context, nor the translation in the response (just copy them as-is);
8. do not translate code fragments but copy them to your output. If there are English examples, variable names or definitions in code fragments, keep them in English.
Now translate the following task with the requirements set out above. Do not provide an explanation and do not add anything else.
"""
The system message was:
You are a helpful assistant that translates English to Dutch according to the requirements that are given to you.
Note that 77 items (0.5%) were not successfully translated. This can either mean that the prompt was too long for the given limit (max_tokens=1024) or that the generated translation could not be parsed into instruction, context and response fields. The missing IDs are [1502, 1812, 1868, 4179, 4541, 6347, 8851, 9321, 10588, 10835, 11257, 12082, 12319, 12471, 12701, 12988, 13066, 13074, 13076, 13181, 13253, 13279, 13313, 13346, 13369, 13446, 13475, 13528, 13546, 13548, 13549, 13558, 13566, 13600, 13603, 13657, 13668, 13733, 13765, 13775, 13801, 13831, 13906, 13922, 13923, 13957, 13967, 13976, 14028, 14031, 14045, 14050, 14082, 14083, 14089, 14110, 14155, 14162, 14181, 14187, 14200, 14221, 14222, 14281, 14473, 14475, 14476, 14587, 14590, 14667, 14685, 14764, 14780, 14808, 14836, 14891, 14966].
Initial Data Collection and Normalization
Initial data collection by databricks. See their repository for more information about this dataset.
Considerations for Using the Data
Note that the translations in this new dataset have not been verified by humans! Use at your own risk, both in terms of quality and biases.
Discussion of Biases
As with any machine-generated texts, users should be aware of potential biases that are included in this dataset. Although the prompt specifically includes make sure to avoid biases (such as gender bias, grammatical bias, social bias), of course the impact of such command is not known. It is likely that biases remain in the dataset so use with caution.
Other Known Limitations
The translation quality has not been verified. Use at your own risk!
Licensing Information
This repository follows the original databricks license, which is CC BY-SA 3.0 but see below for a specific restriction.
This text was generated (either in part or in full) with GPT-3 (gpt-3.5-turbo), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.
If you use this dataset, you must also follow the Sharing and Usage policies.
As clearly stated in their Terms of Use, specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. As far as I am aware, that is a specific restriction that should serve as an addendum to the current license.
This dataset is also available on the Hugging Face hub, its canonical repository.
Attribution-NonCommercial 2.0 (CC BY-NC 2.0): https://creativecommons.org/licenses/by-nc/2.0/
License information was derived automatically
This dataset contains 51,712 conversations between an AI assistant and a (fake) "Human" (generated) in Dutch. They are translations of the Alpaca Cleaned dataset.
Data Instances
{ 'id': 7, 'instruction': 'Leg uit waarom de volgende breuk gelijk is aan 1/4', 'input': '4/16', 'output': 'De breuk 4/16 is gelijk aan 1/4 omdat zowel de teller als de noemer deelbaar zijn door 4. Door zowel de teller als de noemer door 4 te delen, krijgen we de breuk 1/4.' }
Data Fields
id: the ID of the item. The following ID is not included because it could not be translated: [23019]
instruction: the given instruction
input: optional input to accompany the instruction. Can be empty.
output: the "answer" to the instruction
Dataset Creation
The instructions, inputs and outputs were translated with OpenAI's API for gpt-3.5-turbo, using max_tokens=1024 and temperature=0 as parameters.
The prompt template to translate is (where src_lang is English and tgt_lang is Dutch):
TRANSLATION_PROMPT = """You are asked to translate a task's instruction, optional input to the task, and the output of the task, from {src_lang} into {tgt_lang}.
Here are the requirements that you should adhere to:
1. maintain the format: the task consists of a task instruction (marked instruction:), optional input to the task (marked input:) and output for the task (marked output:);
2. do not translate the identifiers instruction:, input:, and output: but instead copy them to your output;
3. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
4. translate the instruction and input text using informal, but standard, language;
5. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
6. if the instruction is to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in the input in {tgt_lang}, and then also generate a corrected output version in the output in {tgt_lang};
7. if the instruction is to translate text from one language to another, then you do not translate the text that needs to be translated in the instruction or the input, nor the translation in the output (just copy them as-is);
8. do not translate code fragments but copy them to your output. If there are English examples, variable names or definitions in code fragments, keep them in English.
Now translate the following task with the requirements set out above. Do not provide an explanation and do not add anything else.
"""
This prompt is concatenated with the instruction, optionally the input, and the output. In code, that last part looks like this:
text = f'instruction: "{instruction}"\n'
if inputstr:
    text += f'input: "{inputstr}"\n'
text += f'output: "{outputstr}"'
The system message was:
You are a helpful assistant that translates English to Dutch to the requirements that are given to you.
Note that 1 item (0.002%) was not successfully translated. The translation was missing the input, instruction, or output keywords where those were expected. The ID for the missing item is [23019].
Initial data creation of the English dataset by Tatsu lab and cleaned by Yahma.
Also available on HuggingFace hub (with a more extensive README).
Licensing Information
As per OpenAI's terms of use, this dataset cannot be used to build a commercial system that competes with OpenAI's services. Similar to the original Alpaca dataset, this dataset is released under CC BY-NC 4.0.
This text was generated (either in part or in full) with GPT-3 (gpt-3.5-turbo), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.
If you use this dataset, you must also follow the Sharing and Usage policies.
As clearly stated in their Terms of Use, specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. As far as I am aware, that is a specific restriction that should serve as an addendum to the current license.
Many travelers get nervous when calling airlines, but being direct and prepared helps. Use polite and specific language to guide the agent quickly; they'll appreciate the clarity, and you'll get faster results.
Start by confirming your full name and your six-character confirmation code so the agent can quickly locate your reservation. Example: "Hi, my name is John Smith, and I need help changing my flight. My confirmation is XXXXXX."
Be honest about your reason. Saying "I had an emergency" or "Plans have changed" is acceptable. Avoid long stories; agents need clear information to process updates. Your tone matters, and firm but friendly works best.
To request a change, say: "I'd like to change my flight to a later time or different date if available. What are my options?" Then pause. The agent will check availability, policy, and potential fees before responding.
If you're canceling, say: "I need to cancel my flight. Can I receive a credit or refund based on the fare rules?" Don't assume a refund; ask. You can also request to apply unused value toward a future trip.
If you have travel insurance, ask: "Do I need documentation for my insurance provider to cover this cancellation?" Agents often know what information to include. Get a cancellation receipt emailed to you immediately.
Need standby? Say: "Can you put me on the same-day standby list for an earlier or later flight today?" They'll check seat availability. Same-day standby usually opens within 24 hours of your original departure.
Want a waiver due to a delay? Say: "My flight was delayed or canceled. Can you rebook me without a fee or offer a refund?" Mention if it's the airline's fault; American's policies cover you in many such cases.
If you're calling for someone else, always have their name, date of birth, and confirmation code ready. Say: "I'm calling on behalf of [Name]." The agent may request verification before making changes.
Elite members should mention their AAdvantage tier immediately: "I'm Platinum Pro. Am I eligible for complimentary changes or upgrades?" That unlocks better perks. Don't assume your loyalty benefits apply automatically; ask clearly.
If you're rebooking after using a trip credit or eVoucher, say: "Can I apply my travel credit to this new booking?" Provide the credit number so the agent can confirm your remaining balance and expiry date.
To end the call, always ask for an email confirmation of the change or cancellation. Say: "Can you please email me a receipt or new itinerary?" That protects you from any system error or misunderstanding.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘DSS Township Counts - by Type of Assistance (TOA) - CY 2020’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/b55c26c1-3244-4790-8387-5ebe2b28754c on 26 January 2022.
--- Dataset description provided by original source is as follows ---
In order to facilitate public review and access, enrollment data published on the Open Data Portal is provided as promptly as possible after the end of each month or year, as applicable to the data set. Due to eligibility policies and operational processes, enrollment can vary slightly after publication. Please be aware of the point-in-time nature of the published data when comparing to other data published or shared by the Department of Social Services, as this data may vary slightly.
As a general practice, for monthly data sets published on the Open Data Portal, DSS will continue to refresh the monthly enrollment data for three months, after which time it will remain static. For example, when March data is published, the data in January and February will be refreshed. When April data is published, February and March data will be refreshed, but January will not change. This allows the Department to account for the most common enrollment variations in published data while also ensuring that data remains as stable as possible over time. In the event of a significant change in enrollment data, the Department may republish reports and will notate such republication dates and reasons accordingly.

In March 2020, Connecticut opted to add a new Medicaid coverage group: the COVID-19 Testing Coverage for the Uninsured. Enrollment data on this limited-benefit Medicaid coverage group is being incorporated into Medicaid data effective January 1, 2021. Enrollment data for this coverage group prior to January 1, 2021, was listed under State Funded Medical. An historical accounting of enrollment of the specific coverage group starting in calendar year 2020 will also be published separately.

DSS CY 2020 Town counts - Number of people enrolled in DSS services in the calendar year 2020, by township and type of assistance (TOA).

NOTE: On April 22, 2019 the methodology for determining HUSKY A Newborn recipients changed, which caused an increase of recipients for that benefit starting in October 2016. We now count recipients recorded in the ImpaCT system as well as in the HIX system for that assistance type, instead of using HIX exclusively. Also, the methodology for determining the address of the recipients changed: 1. The address of a recipient in the ImpaCT system is now correctly determined specific to that month instead of using the address of the most recent month. This resulted in some shuffling of the recipients among townships starting in October 2016. 2. If, in a given month, a recipient has benefit records in both the HIX system and in the ImpaCT system, the address of the recipient is now calculated as follows to resolve conflicts: use the residential address in ImpaCT if it exists, else use the mailing address in ImpaCT if it exists, else use the address in HIX. This resulted in a reduction in counts for most townships starting in March 2017 because a single address is now used instead of two when the systems do not agree.

NOTE: On February 14, 2019, the enrollment counts for 2012-2015 across all programs were updated to account for an error in the data integration process. As a result, the count of the number of people served increased by 13% for 2012, 10% for 2013, 8% for 2014 and 4% for 2015. Counts for 2016, 2017 and 2018 remain unchanged.

NOTE: On 1/16/2019 these counts were revised to count a recipient in all locations that recipient resided in that year.

NOTE: On 1/1/2019 the counts were revised to count a recipient in only one town per year even when the recipient moved within the year. The most recent address is used. (This was reversed later; see above.)
--- Original source retains full ownership of the source dataset ---
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset and the validation are fully described in a Nature Scientific Data Descriptor https://www.nature.com/articles/s41597-019-0265-5
If you want to use this dataset in an interactive environment, then use this link https://mybinder.org/v2/gh/GeographerAtLarge/TravelTime/HEAD
The following text is a summary of the information in the above Data Descriptor.
The dataset is a suite of global travel-time accessibility indicators for the year 2015, at approximately one-kilometre spatial resolution for the entire globe. The indicators show an estimated (and validated) land-based travel time to the nearest city and nearest port for a range of city and port sizes.
The datasets are in GeoTIFF format and are suitable for use in Geographic Information Systems and statistical packages for mapping access to cities and ports and for spatial and statistical analysis of the inequalities in access by different segments of the population.
These maps represent a unique global representation of physical access to essential services offered by cities and ports.
The datasets
travel_time_to_cities_x.tif (where x has values from 1 to 12): the value of each pixel is the estimated travel time in minutes to the nearest urban area in 2015. There are 12 data layers based on different sets of urban areas, defined by their population in year 2015 (see PDF report).
travel_time_to_ports_x.tif (where x ranges from 1 to 5): the value of each pixel is the estimated travel time in minutes to the nearest port in 2015. There are 5 data layers based on different port sizes.
Format: Raster dataset, GeoTIFF, LZW-compressed
Unit: Minutes
Data type: 16-bit unsigned integer
No data value: 65535
Flags: None
Spatial resolution: 30 arc seconds
Spatial extent: Upper left -180, 85; Lower left -180, -60; Upper right 180, 85; Lower right 180, -60
Spatial Reference System (SRS): EPSG:4326 - WGS84 - Geographic Coordinate System (lat/long)
Temporal resolution: 2015
Temporal extent: 2015. Updates may follow for future years, but these are dependent on the availability of updated inputs on travel times and city locations and populations.
Methodology
Travel time to the nearest city or port was estimated using an accumulated cost function (accCost) in the gdistance R package (van Etten, 2018). This function requires two input datasets: (i) a set of locations to estimate travel time to and (ii) a transition matrix that represents the cost or time to travel across a surface.
The set of locations were based on populated urban areas in the 2016 version of the Joint Research Centre’s Global Human Settlement Layers (GHSL) datasets (Pesaresi and Freire, 2016) that represent low density (LDC) urban clusters and high density (HDC) urban areas (https://ghsl.jrc.ec.europa.eu/datasets.php). These urban areas were represented by points, spaced at 1km distance around the perimeter of each urban area.
Marine ports were extracted from the 26th edition of the World Port Index (NGA, 2017), which contains the location and physical characteristics of approximately 3,700 major ports and terminals. Ports are represented as single points.
The transition matrix was based on the friction surface (https://map.ox.ac.uk/research-project/accessibility_to_cities) from the 2015 global accessibility map (Weiss et al, 2018).
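The published workflow is in R; purely as an illustration of the accumulated-cost idea (not the authors' code), a rough Python equivalent using scikit-image might look like this, with the friction values and target locations invented for the example:
import numpy as np
from skimage.graph import MCP_Geometric

# Hypothetical friction surface: time (minutes) to cross each pixel.
rng = np.random.default_rng(42)
friction = rng.uniform(0.5, 5.0, size=(200, 200))

# Hypothetical target cells (row, col) standing in for city or port points.
targets = [(20, 30), (150, 170)]

# Accumulate cost outward from all targets at once; every cell receives
# the minimum cumulative travel time to its nearest target, which is
# analogous to what gdistance's accCost() computes.
mcp = MCP_Geometric(friction)
cumulative_cost, _ = mcp.find_costs(starts=targets)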
Code
The R code used to generate the 12 travel time maps is included in the zip file that can be downloaded with these data layers. The processing zones are also available.
Validation
The underlying friction surface was validated by comparing travel times between 47,893 pairs of locations against journey times from a Google API. Our estimated journey times were generally shorter than those from the Google API. Across the tiles, the median journey time from our estimates was 88 minutes within an interquartile range of 48 to 143 minutes, while the median journey time estimated by the Google API was 106 minutes within an interquartile range of 61 to 167 minutes. Across all tiles, the differences were skewed to the left, and our travel time estimates were shorter than those reported by the Google API in 72% of the tiles. The median difference was −13.7 minutes within an interquartile range of −35.5 to 2.0 minutes, while the absolute difference was 30 minutes or less for 60% of the tiles and 60 minutes or less for 80% of the tiles. The median percentage difference was −16.9% within an interquartile range of −30.6% to 2.7%, while the absolute percentage difference was 20% or less in 43% of the tiles and 40% or less in 80% of the tiles.
This process and results are included in the validation zip file.
Usage Notes
The accessibility layers can be visualised and analysed in many Geographic Information Systems or remote sensing software such as QGIS, GRASS, ENVI, ERDAS or ArcMap, and also by statistical and modelling packages such as R or MATLAB. They can also be used in cloud-based tools for geospatial analysis such as Google Earth Engine.
The travel-time-to-cities layers represent travel times to human settlements of different population ranges. Two or more layers can be combined into one layer by recording the minimum pixel value across the layers. For example, a map of travel time to the nearest settlement of 5,000 to 50,000 people could be generated by taking the minimum of the three layers that represent the travel time to settlements with populations between 5,000 and 10,000, 10,000 and 20,000, and 20,000 and 50,000 people. A sketch of this step follows below.
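As a minimal sketch of that minimum-combination step (the file names are placeholders, and this assumes the rasterio and numpy packages):
import numpy as np
import rasterio

# Placeholder file names for three population-range layers.
paths = ["travel_time_to_cities_a.tif",
         "travel_time_to_cities_b.tif",
         "travel_time_to_cities_c.tif"]

layers = []
for p in paths:
    with rasterio.open(p) as src:
        layers.append(src.read(1))
        profile = src.profile

# Per-pixel minimum travel time across the layers. Because the no-data
# value (65535) is the largest possible 16-bit value, a pixel stays
# no-data only if it is no-data in every layer.
combined = np.minimum.reduce(layers)

with rasterio.open("travel_time_combined.tif", "w", **profile) as dst:
    dst.write(combined, 1)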
The accessibility layers also permit user-defined hierarchies that go beyond computing the minimum pixel value across layers. A user-defined complete hierarchy can be generated when the union of all categories adds up to the global population, and the intersection of any two categories is empty. Everything else is up to the user in terms of logical consistency with the problem at hand.
The accessibility layers are relative measures of the ease of access from a given location to the nearest target. While the validation demonstrates that they do correspond to typical journey times, they cannot be taken to represent actual travel times. Errors in the friction surface will be accumulated as part of the accumulated cost function, and it is likely that locations that are further away from targets will have a greater divergence from a plausible travel time than those that are closer to the targets. Care should be taken when referring to travel time to the larger cities when the locations of interest are extremely remote, although they will still be plausible representations of relative accessibility. Furthermore, a key assumption of the model is that all journeys will use the fastest mode of transport and take the shortest path.
GNU General Public License v2.0: https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html
This dataset contains 54,444 conversations between an AI assistant and a (fake) "Human" (generated) in Dutch. They are translations of Baize's machine-generated answers to the Quora dataset.
Data Instances
{
    'id': 10,
    'input': 'Het gesprek tussen de mens en de AI-assistent.\n'
             '[|Human|] Ik vroeg me af wanneer de Moto G3 de Android 7.0-update zal krijgen?\n'
             '[|AI|] De Moto G3 (Moto G 3rd gen 2015) komt in aanmerking voor de Android 7.0 (Nougat) update.\n'
             '[|Human|] Geweldig! Weet je wanneer de update beschikbaar zal zijn?\n'
             '[|AI|] De update werd uitgebracht in december 2016 voor de Moto G4 en G4 Plus. Er is echter nog geen woord over wanneer de Moto G3 de update zal ontvangen.\n'
             '[|Human|] Is er nieuws of het binnenkort de update zal ontvangen?\n'
             '[|AI|] Het spijt me, ik heb momenteel geen informatie daarover. Het is het beste om uit te kijken naar updates van Motorola.\n'
             '[|Human|] ',
    'topic': 'Wanneer krijgt Moto G3 (Moto G 3e gen 2015) de Android 7.0 (Nougat) update?'
}
Data Fields
id: the ID of the conversation.
input: the machine-generated conversation. Every conversation starts with "Het gesprek tussen de mens en de AI-assistent." and has at least one occurrence of both [|AI|] and [|Human|].
topic: a short description of the conversation topic.
Dataset Creation
Both the conversations and the topics were translated with OpenAI's API for gpt-3.5-turbo, using max_tokens=1024 and temperature=0 as parameters.
The prompt template to translate the input is:
CONVERSATION_TRANSLATION_PROMPT = """You are asked to translate a conversation between an AI assistant and a human from {src_lang} into {tgt_lang}.
Here are the requirements that you should adhere to:
1. maintain the format: the conversation consists of the AI (marked as `[|AI|]`) and the human (`[|Human|]`) talking in turns and responding to each other;
2. do not translate the speaker identifiers `[|AI|]` and `[|Human|]` but always copy them into the translation in appropriate places;
3. ensure accurate translation and keep the correctness of the conversation;
4. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
5. translate the human's text using informal, but standard, language;
6. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
7. if the human asks to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in {tgt_lang}, and then also generate a corrected output version for the AI in {tgt_lang};
8. if the human asks to translate text from one to another language, then you only translate the human's question to {tgt_lang} but you keep the translation that the AI provides in the language that the human requested;
9. do not translate code fragments but copy them as they are. If there are English examples, variable names or definitions in code fragments, keep them in English.
Now translate the following conversation with the requirements set out above. Do not provide an explanation and do not add anything else.
"""
The prompt to translate the topic is:
TOPIC_TRANSLATION_PROMPT = "Translate the following title of a conversation from {src_lang} to {tgt_lang} in a succinct," \
                           " summarizing manner. Translate accurately and formally. Do not provide any explanation" \
                           " about the translation and do not include the original title."
The system message was:
You are a helpful assistant that translates English to Dutch to the requirements that are given to you.
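As a rough sketch only (the dataset author's actual code is not shown here), a call combining the prompt, parameters, and system message above could look like this with the legacy openai (<1.0) Python client:
import openai  # legacy (<1.0) ChatCompletion interface assumed

SYSTEM_MSG = ("You are a helpful assistant that translates English to Dutch "
              "to the requirements that are given to you.")

def translate_conversation(conversation: str) -> str:
    prompt = CONVERSATION_TRANSLATION_PROMPT.format(src_lang="English",
                                                    tgt_lang="Dutch")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        max_tokens=1024,
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_MSG},
            {"role": "user", "content": prompt + conversation},
        ],
    )
    return response["choices"][0]["message"]["content"]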
Note that 12 items (0.02%) were not successfully translated. The translation was missing the AI identifier [|AI|] and/or the human identifier [|Human|]. The IDs for the missing items are [1352, 14599, 25985, 27167, 28254, 33731, 41550, 44364, 47293, 49060, 51383, 53740].
The translation quality has not been verified. Use at your own risk!
Licensing Information
Licensing info for quora is listed as unknown. Baize is listed as GPLv3, so this dataset is also listed as such. A specific restriction is in place: as per OpenAI's terms of use, this dataset cannot be used to build a commercial system that competes with OpenAI's services.
This text was generated (either in part or in full) with GPT-3 (gpt-3.5-turbo), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.
If you use this dataset, you must also follow the Sharing and Usage policies.
As clearly stated in their Terms of Use, specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. As far as I am aware, that is a specific restriction that should serve as an addendum to the current license.
This dataset is also available on the Hugging Face hub which is the canonical repository for this dataset. See the link for more info.
CC0 1.0 Universal (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
This dataset (Auction Verification Regression Anomalies) contains the anomaly scores of residual outliers obtained from a regression tree constructed from the Auction Verification dataset available at: https://archive.ics.uci.edu/dataset/713/auction+verification (Ordoni et al., 2022).
The Auction Verification Regression Anomalies dataset contains three columns:
- First column: RecordNum, which represents the sequence of records in the Auction Verification dataset.
- Second column: the anomaly score of each record.
- Third column: the outlier label, where records with anomaly scores > 2 are labelled "Yes", else "No".
To make sense of this Auction Verification Regression Anomalies dataset, reading the problem context described in the paper written by Ordoni et al. (2022) will be helpful. In short, one of the tasks of Ordoni et al. was to facilitate the auction verification process by predicting auction verification time. However, there might be times when the predictions are inaccurate.
The dataset (Auction Verification Regression Anomalies) contains information on whether the auction verification time of the original data points (in Auction Verification) was accurately predicted. Data points with inaccurate predictions would be labelled as "Yes" in the outlier labels.
How to use
Merge the anomaly scores and labels with the Auction Verification dataset available at: https://archive.ics.uci.edu/dataset/713/auction+verification (Ordoni et al., 2022). Please use the exact record order. That is, RecordNum 1, 2, 3, ... corresponds to the first, second, third, ... record in the Auction Verification dataset, respectively. A sketch of this merge follows below.
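A minimal sketch of that positional merge (the file names and the anomaly-score and label column names are placeholders, assuming pandas):
import pandas as pd

# Placeholder file names for the two datasets.
auction = pd.read_csv("auction_verification.csv")            # original UCI data
anomalies = pd.read_csv("auction_regression_anomalies.csv")  # this dataset

# RecordNum 1, 2, 3, ... corresponds to the first, second, third, ...
# record of the Auction Verification dataset, so align by position.
anomalies = anomalies.sort_values("RecordNum").reset_index(drop=True)
merged = pd.concat(
    [auction.reset_index(drop=True),
     anomalies[["AnomalyScore", "OutlierLabel"]]],  # placeholder column names
    axis=1,
)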
This dataset has been used in the following research papers: (1) Title: Enhancing Regression Tree Predictions with Terminal-Node Anomaly Detection Author: Swee Chuan Tan Published in: Proceedings of the 2023 6th Artificial Intelligence and Cloud Computing Conference (AICCC '23) Year: 2024 ISBN: 9798400716225 DOI: 10.1145/3639592.3639596 Publisher: Association for Computing Machinery Location: Kyoto, Japan Pages: 21–26 Keywords: Anomaly detection, Outlier, Regression tree, Robust regression Abstract: This paper presents a framework for detecting outliers in regression tree model predictions. It uses a training dataset to build a regression tree and labels significantly different predictions as outliers. These outlier labels are then used to construct an anomaly detector. The method significantly reduces the Mean Absolute Errors of regression tree predictions across four datasets, offering a practical approach for discarding suspicious predictions without compromising overall system quality. URL: https://doi.org/10.1145/3639592.3639596
(2) An enhanced version of the above article will appear in the Journal of Data Science and Intelligent Systems (JDSIS).
Reference: Ordoni, E., Bach, J., Fleck, A., & Bach, J. (2022). Auction Verification [Dataset]. UCI Machine Learning Repository. https://doi.org/10.24432/C52K6N.
☎️+1(888) 642-5075 When your travel plans change unexpectedly, talking to a United Airlines representative by phone is often the most efficient way to rebook. ☎️+1(888) 642-5075 Whether due to cancellations, delays, or personal reasons, getting assistance directly from an agent ensures accuracy and saves time. Rebooking over the ☎️+1(888) 642-5075 phone also allows you to clarify policies, confirm fare differences, and explore options in real time.
☎️+1(888) 642-5075 Before calling United Airlines, gather all essential travel information. This includes your confirmation number, original flight details, and MileagePlus account if applicable. ☎️+1(888) 642-5075 Write down your preferred new travel dates and any alternative airports or times you’re open to. Being prepared makes it easier ☎️+1(888) 642-5075 for the customer service representative to offer the most suitable options quickly and efficiently.
☎️+1(888) 642-5075 When the call connects, begin by stating your request clearly. Say something like, “Hi, I need to rebook my flight due ☎️+1(888) 642-5075 to a schedule change.” If the rebooking is related to a cancellation or delay caused by the airline, make sure to ☎️+1(888) 642-5075 mention that upfront, as it can influence eligibility for free changes or upgrades.
☎️+1(888) 642-5075 If you’re calling about a voluntary change, ask about any change fees or fare differences. Say, “Can you tell me if ☎️+1(888) 642-5075 there are any additional charges for switching flights?” Being upfront about your expectations can help the agent provide tailored ☎️+1(888) 642-5075 recommendations without wasting time on options outside your budget.
☎️+1(888) 642-5075 If your original booking was made with miles or a travel certificate, mention that early in the conversation. You can say, ☎️+1(888) 642-5075 “This flight was booked using miles—can you help me rebook using my rewards balance?” Notifying them upfront helps reduce confusion ☎️+1(888) 642-5075 and ensures proper fare calculations are made.
☎️+1(888) 642-5075 Be polite and cooperative. The more courteous you are, the more likely the agent will go the extra mile to assist ☎️+1(888) 642-5075 you with alternate flights or waive certain fees. Always thank them for checking different dates, routes, or class availability while they ☎️+1(888) 642-5075 work through your options.
☎️+1(888) 642-5075 Ask direct questions during the call like, “What’s the earliest flight I can get on?” or “Can you rebook me on ☎️+1(888) 642-5075 a nonstop flight instead?” The more specific your request, the faster and more accurate the agent’s search will be. Being ☎️+1(888) 642-5075 flexible with time or layovers can also help you secure a better option.
☎️+1(888) 642-5075 If you’re rebooking due to an emergency or medical issue, explain your situation briefly and respectfully. You can say, “There’s a ☎️+1(888) 642-5075 family emergency—I need to change my flight as soon as possible.” While not all fees may be waived, some agents ☎️+1(888) 642-5075 are empowered to offer compassionate support or expedited rebooking assistance.
☎️+1(888) 642-5075 If you’re having difficulty with language or need additional help, request a supervisor or ask if a bilingual representative is ☎️+1(888) 642-5075 available. Say, “I think I need to speak to someone else for clarity.” Don’t hesitate to advocate for your needs—☎️+1(888) 642-5075 United aims to support travelers of all backgrounds.
☎️+1(888) 642-5075 If you’ve already checked online options but couldn’t finalize the booking, let the rep know. “I tried rebooking on the website ☎️+1(888) 642-5075 but it didn’t go through—can you help me over the phone?” This can sometimes expedite troubleshooting, especially if a system ☎️+1(888) 642-5075 error is involved.
☎️+1(888) 642-5075 Take notes during the conversation, including the representative’s name, the new booking confirmation number, and any pricing details. This protects you ☎️+1(888) 642-5075 in case of later discrepancies or if you need to contact United again for follow-up. Documentation is especially important when ☎️+1(888) 642-5075 dealing with multi-leg trips or international flights.
☎️+1(888) 642-5075 If your new itinerary involves a long layover or unusual routing, ask if United can provide travel credits or amenities. Say, ☎️+1(888) 642-5075 “This layover is over 10 hours—can I receive a meal voucher or hotel discount?” Depending on your fare class or ☎️+1(888) 642-5075 MileagePlus status, some perks may be available.
☎️+1(888) 642-5075 If the call disconnects unexpectedly, call back immediately and reference your previous conversation. “I was just speaking to someone about a ☎️+1(888) 642-5075 rebooking and the call dropped—can you continue where we left off?” Having your confirmation number handy will speed up ☎️+1(888) 642-5075 reconnecting your case with the next representative.
☎️+1(888) 642-5075 Calling during non-peak hours such as early mornings or late evenings often leads to shorter wait times. Avoid Mondays or post-holiday ☎️+1(888) 642-5075 periods if possible, as those tend to be busiest. Try using United’s call-back option if you can't wait on ☎️+1(888) 642-5075 hold for long.
☎️+1(888) 642-5075 If you’ve booked through a third-party site like Expedia or a travel agent, you may need to contact them first. However, ☎️+1(888) 642-5075 you can still call United to verify eligibility or seat availability. Say, “My ticket was booked through a travel partner—can ☎️+1(888) 642-5075 you confirm if I’m eligible to change my flight?”
☎️+1(888) 642-5075 After rebooking is complete, confirm all new details verbally and via email. Say, “Can you repeat the flight number and times ☎️+1(888) 642-5075 for me?” Mistakes can happen, and double-checking ensures you won’t have surprises later. Also, check your United account or ☎️+1(888) 642-5075 app for real-time updates after the call ends.
☎️+1(888) 642-5075 Always end the call by thanking the representative for their assistance. A simple “Thank you for helping me today—it really ☎️+1(888) 642-5075 means a lot” goes a long way. If you had an excellent experience, consider submitting feedback through United’s website or ☎️+1(888) 642-5075 customer service survey.
Finding your itinerary on American Airlines is straightforward and can be done in several ways, whether you booked directly or through a third party. ☎️+1(877) 471-1812 is your best resource if you're unsure how to access your travel plans. Begin by visiting the official American Airlines website or downloading the mobile app. ☎️+1(877) 471-1812 can assist if you forget login credentials or need help locating your reservation on the platform. Click on “My Trips” on the homepage. ☎️+1(877) 471-1812
Enter your six-character confirmation code (also known as the PNR or record locator) along with your last name to retrieve your itinerary quickly. ☎️+1(877) 471-1812 is a smart backup if your itinerary doesn’t appear or if there’s a system error. You can also sign in with your AAdvantage account for a more seamless experience. ☎️+1(877) 471-1812 will guide you through linking past or upcoming trips to your loyalty profile. Once inside, your trip details, flight numbers, times, and terminals are listed. ☎️+1(877) 471-1812
For users who booked with a travel agent or through third-party sites, American Airlines may not immediately display your reservation unless you provide the correct code. ☎️+1(877) 471-1812 can verify your ticket by looking up your itinerary using payment info or booking email. If your ticket includes codeshare flights with partners like British Airways or Alaska Airlines, additional steps may be needed. ☎️+1(877) 471-1812 ensures the entire itinerary, including partner segments, is visible and synced properly. Confirm all flight segments are active and ticketed. ☎️+1(877) 471-1812
Mobile access is just as easy. Download the American Airlines mobile app from the App Store or Google Play, then log in with your AAdvantage details. ☎️+1(877) 471-1812 can help reset your account if you’ve forgotten your password or login. Once signed in, go to the “Trips” tab where your current and past itineraries are stored. ☎️+1(877) 471-1812 allows you to call in for issues like unlinked flights, wrong passenger names, or duplicate bookings. You’ll see flight times, gates, seat numbers, and more. ☎️+1(877) 471-1812
If someone else booked your ticket, like a corporate travel manager or family member, ask them for the confirmation code and check yourself online. ☎️+1(877) 471-1812 can validate your identity and send the itinerary directly to your email. You can even request an SMS link to view your schedule without logging in. ☎️+1(877) 471-1812 will resend your travel confirmation and itinerary to your preferred contact method. This is helpful for travelers who lost their booking email. ☎️+1(877) 471-1812
Email confirmations also include itinerary details. Search your inbox for “American Airlines Confirmation” or look for your six-character record locator. ☎️+1(877) 471-1812 can resend it if you accidentally deleted the message. Your itinerary email includes important details like departure time, terminal, seat, and layover information. Make sure your email account is linked to your booking. ☎️+1(877) 471-1812 will guide you on updating your email or recovering missing confirmations. Keep the email accessible during your journey. ☎️+1(877) 471-1812
To summarize, you can find your American Airlines itinerary through the website, mobile app, confirmation email, or by calling support. Always have your confirmation number ready. ☎️+1(877) 471-1812 is available 24/7 to assist if you cannot locate your travel details or need to confirm changes. Being able to access your itinerary in real time helps you stay updated with any delays, gate changes, or seat adjustments. ☎️+1(877) 471-1812 ensures your plans are clear and correct before departure. Don't wait until the last minute. ☎️+1(877) 471-1812
Reason for Selection
Protected natural areas help foster a conservation ethic by providing opportunities for people to connect with nature, and also support ecosystem services like offsetting heat island effects (Greene and Millward 2017, Simpson 1998), water filtration, stormwater retention, and more (Hoover and Hopton 2019). In addition, parks, greenspace, and greenways can help improve physical and psychological health in communities (Gies 2006). However, parks are not equitably distributed within easy walking distance for everyone. This indicator also complements the urban park size indicator by capturing the value of potential new parks.
Input Data
The Trust for Public Land (TPL) ParkServe database, accessed 8-8-2021: Park priority areas (ParkServe_ParkPriorityAreas_08062021)
From the TPL ParkServe documentation:
The ParkServe database maintains an inventory of parks for every urban area in the U.S., including Puerto Rico. This includes all incorporated and Census-designated places that lie within any of the country’s 3,000+ census-designated urban areas. All populated areas in a city that fall outside of a 10-minute walk service area are assigned a level of park priority, based on a comprehensive index of six equally weighted demographic and environmental metrics:
- Population density
- Density of low-income households, defined as households with income less than 75 percent of the urban area median household income
- Density of people of color
- Community health: a combined index based on the rate of poor mental health and low physical activity from the 2020 CDC PLACES census tract dataset
- Urban heat islands: surface temperature at least 1.25° greater than the city mean surface temperature, from The Trust for Public Land, based on Landsat 8 satellite imagery
- Pollution burden: air toxics respiratory hazard index from 2020 EPA EJScreen
The 10-minute walk: For each park, we create a 10-minute walkable service area using a nationwide walkable road network dataset provided by Esri. The analysis identifies physical barriers such as highways, train tracks, and rivers without bridges and chooses routes without barriers.
CDC Social Vulnerability Index 2018: RPL_Themes
Social vulnerability refers to the capacity for a person or group to “anticipate, cope with, resist and recover from the impact” of a natural or anthropogenic disaster such as extreme weather events, oil spills, earthquakes, and fires. Socially vulnerable populations are more likely to be disproportionately affected by emergencies (Wolkin et al. 2018).
In this indicator, we use the “RPL_THEMES” attribute from the Social Vulnerability Index, described here. “The Geospatial Research, Analysis, and Services Program (GRASP) at Centers for Disease Control and Prevention/Agency for Toxic Substances and Disease Registry developed the Social Vulnerability Index (SVI). The SVI is a dataset intended to help state, local, and tribal disaster management officials identify where the most socially vulnerable populations occur (Agency for Toxic Substances and Disease Registry [ATSDR] 2018)” (Flanagan et al. 2018).
“The SVI database is regularly updated and includes 15 census variables (ATSDR 2018). Each census variable was ranked from highest to lowest vulnerability across all census tracts in the nation with a nonzero population. A percentile rank was calculated for each census tract for each variable. The variables were then grouped among four themes.... A tract-level percentile rank was also calculated for each of the four themes. Finally, an overall percentile rank for each tract as the sum of all variable rankings was calculated. This process of percentile ranking was then repeated for the individual states” (Flanagan et al. 2018).
Base Blueprint 2022 extent
Southeast Blueprint 2023 extent
Mapping Steps
Convert the ParkServe park priority areas layer to a raster using the ParkRank field. Note: The ParkRank scores are calculated using metrics classified relative to each city. Each city contains park rank values that range from 1-3. For the purposes of this indicator, we chose to target potential park areas to improve equity. Because the ParkRank scores are relative for each city, a high score in one city is not necessarily comparable to a high score from another city. In an effort to try to bring more equity into this indicator, we also use the CDC Social Vulnerability Index to narrow down the results.
Reclassify the ParkServe raster to make NoData values 0.
Convert the SVI layer from vector to raster based on the “RPL_Themes” field.
To limit the ParkRank layer to areas with high SVI scores, first identify census tracts with an “RPL_Themes” field value >0.65. Make a new raster that assigns a value of 1 to census tracts that score >0.65 and a value of 0 to everything else. Multiply the resulting raster by the ParkRank layer (see the sketch after these steps).
Reclassify this raster into the 4 classes seen in the final indicator below.
Clip to the spatial extent of Base Blueprint 2022.
As a final step, clip to the spatial extent of Southeast Blueprint 2023.
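The authoritative processing code ships with the Blueprint data download (see the note below); purely as an illustration, the SVI masking and reclassification above might look roughly like this in Python, with stand-in arrays in place of the real rasters:
import numpy as np

rng = np.random.default_rng(0)

# Stand-in rasters; in practice these come from the converted ParkRank
# layer (values 0-3 after the NoData-to-0 reclassification) and the
# rasterized SVI "RPL_Themes" field (values 0-1).
park_rank = rng.integers(0, 4, size=(500, 500)).astype(np.uint8)
svi = rng.random((500, 500))

# Flag tracts above the 0.65 SVI threshold, then mask ParkRank by it.
high_svi = (svi > 0.65).astype(np.uint8)
indicator = park_rank * high_svi  # final classes 0-3, matching the legend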
Note: For more details on the mapping steps, code used to create this layer is available in the Southeast Blueprint Data Download under > 6_Code.
Final Indicator Values
Indicator values are assigned as follows:
3 = Very high priority for a new park that would create nearby equitable access
2 = High priority for a new park that would create nearby equitable access
1 = Moderate priority for a new park that would create nearby equitable access
0 = Not identified as a priority for a new park that would create nearby equitable access (within urban areas)
Known Issues
This indicator could overestimate park need in areas where existing parks are missing from the ParkServe database. TPL regularly updates ParkServe to incorporate the best available park data. If you notice missing parks or errors in the park boundaries or attributes, you can submit corrections through the ParkReviewer tool or by contacting TPL staff.
Within a given area of high park need, the number of people served by the creation of a new park depends on its size and how centrally located it is. This indicator does not account for this variability. Similarly, while creating a new park just outside an area of high park need would create access for some people on the edge, the indicator does not capture the benefits of new parks immediately adjacent to high-need areas. For a more granular analysis of new park benefits, ParkServe’s ParkEvaluator tool allows you to draw a new park, view its resulting 10-minute walk service area, and calculate who would benefit.
Beyond considering distance to a park and whether it is open to the public, this indicator does not account for other factors that might limit park access, such as park amenities or public safety. The TPL analysis excludes private or exclusive parks that restrict access to only certain individuals (e.g., parks in gated communities, fee-based sites). The TPL data includes a wide variety of parks, trails, and open space as long as there is no barrier to entry for any portion of the population.
The indicator does not incorporate inequities in access to larger versus smaller parks. In predicting where new parks would benefit nearby people who currently lack access, this indicator treats all existing parks equally.
This indicator identifies areas where parks are needed, but does not consider whether a site is available to become a park. We included areas of low intensity development in order to capture vacant lots, which can serve as new park opportunities. However, as a result, this indicator also captures some areas that are already used for another purpose (e.g., houses, cemeteries, and businesses) and are unlikely to become parks. In future updates, we would like to use spatial data depicting vacant lots to identify more feasible park opportunities.
This indicator underestimates places in rural areas where many people within a socially vulnerable census tract would benefit from a new park. ParkServe covers incorporated and Census-designated places within census-designated urban areas, which leaves out many rural areas. We acknowledge that there are still highly socially vulnerable communities in rural areas that would benefit from the development of new parks. However, based on the source data, we were not able to capture those places in this version of the indicator.
Other Things to Keep in Mind
The zero values in this indicator contain three distinct types of areas that we were unable to distinguish between in the legend: 1) areas that are not in a community analyzed by ParkServe (ParkServe covers incorporated and Census-designated places within census-designated urban areas); 2) areas in a community analyzed by ParkServe that were not identified as a priority; 3) areas that ParkServe identifies as a priority but do not meet the SVI threshold used to represent areas in most need of improved equitable access.
This indicator only includes park priority areas that fall within the 65th percentile or above from the Social Vulnerability Index. We did not perform outreach to community leaders or community-led organizations for feedback on this threshold. This indicator is intended to generally help identify potential parks that can increase equitable access but should not be solely used to inform the creation of new parks. As the social equity component relies on information summarized by census tract, it should only be used in conjunction with local knowledge and in discussion with local communities (NRPA 2021, Manuel-Navarete et al. 2004).
Disclaimer: Comparing with Older Indicator Versions
There are numerous problems with using Southeast Blueprint indicators for change analysis. Please consult Blueprint staff if you would like to do this (email hilary_morris@fws.gov).
Literature Cited
Centers for