[PDF&VCE] Pass AWS Certified Solutions Architect – Associate Exam By Exercising Lead2pass Latest AWS Certified Solutions Architect – Associate VCE And PDF Dumps (301-325)

2016 October Amazon Official New Released AWS Certified Solutions Architect – Associate Dumps in Lead2pass.com!

AWS Certified Solutions Architect – Associate exam dumps, AWS Certified Solutions Architect – Associate exam questions, AWS Certified Solutions Architect – Associate PDF dumps, AWS Certified Solutions Architect – Associate VCE dumps, AWS Certified Solutions Architect – Associate braindumps, AWS Certified Solutions Architect – Associate study guide, AWS Certified Solutions Architect – Associate practice test

100% Free Download! 100% Pass Guaranteed!

Lead2pass is constantly updating its AWS CERTIFIED SOLUTIONS ARCHITECT – ASSOCIATE exam dumps. We will provide our customers with the latest and most accurate exam questions and answers, covering comprehensive knowledge points, which will help you easily prepare for the AWS CERTIFIED SOLUTIONS ARCHITECT – ASSOCIATE exam and successfully pass it. You just need to spend 20-30 hours studying the exam dumps.

The following questions and answers are all newly published by the Amazon Official Exam Center: http://www.lead2pass.com/aws-certified-solutions-architect-associate.html

QUESTION 301
A web design company currently runs several FTP servers that their 250 customers use to upload and download large graphic files. They wish to move this system to AWS to make it more scalable, but they wish to maintain customer privacy and keep costs to a minimum.
What AWS architecture would you recommend?

A.    Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to sub-directories within the bucket via use of the ‘username’ policy variable.
B.    Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer.
C.    Create an Auto Scaling group of FTP servers with a scaling policy to automatically scale in when minimum network traffic on the Auto Scaling group is below a given threshold. Load a central list of FTP users from S3 as part of the User Data startup script on each instance.
D.    Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer.

Answer: C
Explanation:
The question's keywords are ‘scalable’ and the desire to ‘move systems’ to AWS, which points to an Auto Scaling group.
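
For reference, the ‘username’ policy variable mentioned in option A works as in this minimal boto3 sketch (the bucket and group names are hypothetical): each IAM user is confined to the key prefix matching their own user name.

import json

import boto3

iam = boto3.client("iam")

# Grant each group member access only to their own prefix in a shared
# bucket; ${aws:username} resolves to the calling IAM user's name.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::shared-graphics-bucket/${aws:username}/*",
    }],
}

iam.put_group_policy(
    GroupName="customers",
    PolicyName="per-customer-prefix",
    PolicyDocument=json.dumps(policy),
)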

QUESTION 302
A group can contain many users. Can a user belong to multiple groups?

A.    Yes, always
B.    No
C.    Yes, but only if they are using two-factor authentication
D.    Yes, but only in a VPC

Answer: A
Explanation:
A group can contain many users, and a user can belong to multiple groups.
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html
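
A quick boto3 sketch of group membership (the user and group names are hypothetical):

import boto3

iam = boto3.client("iam")

# A single user can be added to any number of groups.
for group in ["developers", "auditors"]:
    iam.add_user_to_group(GroupName=group, UserName="alice")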

QUESTION 303
You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)

A.    Route 53 Record Sets
B.    IAM Roles
C.    Elastic IP Addresses (EIP)
D.    EC2 Key Pairs
E.    Launch configurations
F.    Security Groups

Answer: AB
Explanation:
As per the documentation, new Elastic IP addresses must be allocated in the second region; the original addresses cannot be carried over.
Elastic IP Addresses are static IP addresses designed for dynamic cloud computing. Unlike traditional static IP addresses, however, Elastic IP addresses enable you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to instances in your account in a particular region. For DR, you can also pre-allocate some IP addresses for the most critical systems so that their IP addresses are already known before disaster strikes. This can simplify the execution of the DR plan.
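
Pre-allocating an address in the DR region is a single call; a boto3 sketch (the region name is an example):

import boto3

# Allocate an Elastic IP in the DR region ahead of time so the
# address of a critical system is known before disaster strikes.
ec2 = boto3.client("ec2", region_name="us-west-2")
allocation = ec2.allocate_address(Domain="vpc")
print(allocation["PublicIp"], allocation["AllocationId"])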

QUESTION 304
Your company runs a customer-facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three Availability Zones (AZs), which architecture provides high availability?

A.    A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (Elastic Load Balancer), an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ.
B.    A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (Elastic Load Balancer), an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the two other AZs.
C.    A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (Elastic Load Balancer), an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
D.    A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (Elastic Load Balancer), an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.

Answer: D
Explanation:
Spreading each tier across all three AZs means the loss of one AZ still leaves 4 of 6 servers per tier (about 67%, above the 65% minimum), and a Multi-AZ RDS deployment keeps the single database highly available.
http://highscalability.com/blog/2016/1/11/a-beginners-guide-to-scaling-to-11-million-users-on-amazons.html
https://www.airpair.com/aws/posts/building-a-scalable-web-app-on-amazon-web-services-p1

QUESTION 305
Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS instance for data persistence. The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. In the next weeks the overall workload is expected to grow by 30%. Do you need to change anything in the architecture to maintain high availability of the application with the anticipated additional load? Why?

A.    Yes, you should deploy two Memcached ElastiCache clusters in different AZs, because the RDS instance will not be able to handle the load if the cache node fails.
B.    No, if the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact.
C.    Yes, you should deploy the Memcached ElastiCache cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails.
D.    No, if the cache node fails you can always get the same data from the DB without having any availability impact.

Answer: B
Explanation:
Option A proposes two separate clusters rather than one cluster with two nodes.
If a node fails due to a hardware fault in an underlying physical server, ElastiCache will provision a new node on a different server.

QUESTION 306
You are responsible for a legacy web application whose server environment is approaching end of life. You would like to migrate this application to AWS as quickly as possible, since the application environment currently has the following limitations:
The VM’s single 10GB VMDK is almost full.
The virtual network interface still uses the 10Mbps driver, which leaves your 100Mbps WAN connection completely underutilized.
It is currently running on a highly customized Windows VM within a VMware environment.
You do not have the installation media.
This is a mission-critical application with an RTO (Recovery Time Objective) of 8 hours and an RPO (Recovery Point Objective) of 1 hour. How could you best migrate this application to AWS while meeting your business continuity requirements?

A.    Use the EC2 VM Import Connector for vCenter to import the VM into EC2.
B.    Use Import/Export to import the VM as an EBS snapshot and attach to EC2.
C.    Use S3 to create a backup of the VM and restore the data into EC2.
D.    Use the ec2-bundle-instance API to import an image of the VM into EC2.

Answer: A
Explanation:
https://aws.amazon.com/developertools/2759763385083070

QUESTION 307
An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data, and synchronize only the modified elements. Which design would you choose to meet these requirements?

A.    Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day. Create a ‘lastUpdated’ attribute in your DynamoDB table that would represent the timestamp of the last update and use it as a filter.
B.    Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.
C.    Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that will import data from S3 to DynamoDB in the other region.
D.    Also send each write into an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the writes in the second region.

Answer: C
Explanation:
The Data Pipeline export of the DynamoDB table is incremental, amending the backup in S3 with the latest changes.
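
Whichever mechanism performs the copy, selecting only the modified items comes down to filtering on the timestamp attribute. A minimal boto3 sketch, assuming a hypothetical table named orders with a numeric lastUpdated attribute (pagination omitted):

import time

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("orders")

# Fetch only items modified in the last 24 hours (the RPO window).
cutoff = int(time.time()) - 24 * 3600
response = table.scan(FilterExpression=Attr("lastUpdated").gt(cutoff))
for item in response["Items"]:
    print(item)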

QUESTION 308
Refer to the architecture diagram above of a batch processing solution using Simple Queue Service (SQS) to set up a message queue between EC2 instances, which are used as batch processors. CloudWatch monitors the number of job requests (queued messages) and an Auto Scaling group adds or deletes batch servers automatically based on parameters set in CloudWatch alarms. You can use this architecture to implement which of the following features in a cost-effective and efficient manner?

[Diagram: EC2 batch processors consuming jobs from an SQS queue, with CloudWatch alarms driving an Auto Scaling group]
A.    Reduce the overall time for executing jobs through parallel processing by allowing a busy EC2 instance that receives a message to pass it to the next instance in a daisy-chain setup.
B.    Implement fault tolerance against EC2 instance failure, since messages would remain in SQS and work can continue with recovery of EC2 instances; implement fault tolerance against SQS failure by backing up messages to S3.
C.    Implement message passing between EC2 instances within a batch by exchanging messages through SQS.
D.    Coordinate the number of EC2 instances with the number of job requests automatically, thus improving cost effectiveness.
E.    Handle high-priority jobs before lower-priority jobs by assigning a priority metadata field to SQS messages.

Answer: D
Explanation:
https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-using-sqs-queue.html
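
A minimal boto3 sketch of that coordination (the queue, group and threshold names are examples): a scale-out policy fires when the visible-message backlog grows.

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Scale-out policy for the batch-processor group (names are examples).
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-processors",
    PolicyName="scale-out-on-backlog",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)

# Trigger the policy when more than 100 messages are waiting.
cloudwatch.put_metric_alarm(
    AlarmName="batch-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "batch-jobs"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)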

QUESTION 309
Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months resulting in significant financial losses. Your CIO is strongly considering moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection. How would you do this while minimizing costs?

A.    Create an EBS-backed private AMI which includes a fresh install of your application. Set up a script in your data center to back up the local database every hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload.
B.    Install your application on a compute-optimized EC2 instance capable of supporting the application’s average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.
C.    Deploy your application on EC2 instances within an Auto Scaling group across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
D.    Create an EBS-backed private AMI that includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.

Answer: D

QUESTION 310
An ERP application is deployed across multiple AZs in a single region. In the event of failure, the Recovery Time Objective (RTO) must be less than 3 hours, and the Recovery Point Objective (RPO) must be 15 minutes. The customer realizes that data corruption occurred roughly 1.5 hours ago. What DR strategy could be used to achieve this RTO and RPO in the event of this kind of failure?

A.    Take hourly DB backups to S3, with transaction logs stored in S3 every 5 minutes.
B.    Use synchronous database master-slave replication between two Availability Zones.
C.    Take hourly DB backups to EC2 instance store volumes, with transaction logs stored in S3 every 5 minutes.
D.    Take 15-minute DB backups stored in Glacier, with transaction logs stored in S3 every 5 minutes.

Answer: A
Explanation:
Hourly backups in S3 combined with transaction logs captured every 5 minutes allow a point-in-time restore to just before the corruption occurred, satisfying the 15-minute RPO, and restoring from S3 fits within the 3-hour RTO.

QUESTION 311
Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders per day after 12 months. Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure.
Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders.
How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

A.    Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
B.    Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.
C.    Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.
D.    Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.

Answer: C
Explanation:
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ecommerce_checkout_13.pdf
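
The notification step itself is one SES call from the activity worker; a minimal boto3 sketch (the addresses are placeholders, and the sender must already be verified in SES):

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# The Source address must be verified in SES before sending.
ses.send_email(
    Source="orders@example.com",
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Order status update"},
        "Body": {"Text": {"Data": "Your gadget has entered production."}},
    },
)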

QUESTION 312
You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route53 Latency-Based Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions Route53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers)

A.    Latency resource record sets cannot be used in combination with weighted resource record sets.
B.    You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
C.    The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
D.    One of the two working web servers in the other region did not pass its HTTP health check.
E.    You did not set “Evaluate Target Health” to “Yes” on the latency alias resource record set associated with example.com in the region where you disabled the servers.

Answer: BE
Explanation:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-complex-configs.html
For both latency alias resource record sets, you set the value of “Evaluate Target Health” to Yes. You use the Evaluate Target Health setting for each latency alias resource record set to make Amazon Route 53 evaluate the health of the alias targets—the weighted resource record sets—and respond accordingly.
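
A boto3 sketch of one such latency alias record with the setting enabled (the zone IDs and DNS names are placeholders):

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # placeholder hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com",
            "Type": "A",
            "SetIdentifier": "us-east-1",
            "Region": "us-east-1",
            "AliasTarget": {
                "HostedZoneId": "Z2EXAMPLE",  # zone of the alias target
                "DNSName": "east.example.com",
                # Without this, Route 53 keeps answering with this region
                # even when every server behind it is unhealthy.
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)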

QUESTION 313
Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that leverages multiple regions for the most recently accessed content and latency-sensitive portions of the web site. The most latency-sensitive component of the application involves reading user preferences to support web site personalization and ad selection.
In addition to running your application in multiple regions, which option will support this application’s requirements?

A.    Serve user content from S3 and CloudFront, and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers for propagating updates to each table.
B.    Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3 and CloudFront with dynamic content and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region.
C.    Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront and Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers for propagating DynamoDB updates.
D.    Serve user content from S3 and CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.

Answer: A
Explanation:
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_mediasharing_09.pdf
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_adserving_06.pdf

QUESTION 314
Your system recently experienced downtime. During the troubleshooting process you found that a new administrator mistakenly terminated several production EC2 instances. Which of the following strategies will help prevent a similar situation in the future?
The administrator still must be able to:
– launch, start, stop, and terminate development resources.
– launch and start production instances.

A.    Create an IAM user, which is not allowed to terminate instances, by leveraging production EC2 termination protection.
B.    Leverage resource-based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources.
C.    Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances.
D.    Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.

Answer: B
Explanation:
http://blogs.aws.amazon.com/security/post/Tx29HCT3ABL7LP3/Resource-level-Permissions-for-EC2-Controlling-Management-Access-on-Specific-Ins
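
A sketch of such a tag-based guard rail as an inline IAM policy (the user name and tag values are examples); launch, start and stop permissions are left untouched:

import json

import boto3

iam = boto3.client("iam")

# Explicitly deny termination of instances tagged Environment=production.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:TerminateInstances",
        "Resource": "*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/Environment": "production"}
        },
    }],
}

iam.put_user_policy(
    UserName="new-admin",
    PolicyName="protect-production-instances",
    PolicyDocument=json.dumps(policy),
)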

QUESTION 315
A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer’s end; however, the customer is unable to connect from EC2 instances inside its VPC to servers residing in its datacenter. Which of the following options provide a viable solution to remedy this situation? (Choose 2 answers)

A.    Add a route to the route table with an IPsec VPN connection as the target.
B.    Enable route propagation to the virtual private gateway (VGW).
C.    Enable route propagation to the customer gateway (CGW).
D.    Modify the route table of all instances using the ‘route’ command.
E.    Modify the Instances VPC subnet route table by adding a route back to the customer’s on-premises environment.

Answer: BE
Explanation:
https://myawsscribble.wordpress.com/2015/09/25/setting-up-and-configuring-aws-directconnect/
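
Enabling propagation is a single API call per route table; a boto3 sketch (the IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Propagate the routes the VGW learns over Direct Connect into the
# subnet's route table so return traffic can reach on-premises hosts.
ec2.enable_vgw_route_propagation(
    GatewayId="vgw-0123456789abcdef0",
    RouteTableId="rtb-0123456789abcdef0",
)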

QUESTION 316
Your company previously configured a heavily used, dynamically routed VPN connection between your on-premises data center and AWS. You recently provisioned a DirectConnect connection and would like to start using the new connection. After configuring DirectConnect settings in the AWS Console, which of the following options will provide the most seamless transition for your users?

A.    Delete your existing VPN connection to avoid routing loops, configure your DirectConnect router with the appropriate settings, and verify network traffic is leveraging DirectConnect.
B.    Configure your DirectConnect router with a higher BGP priority than your VPN router, verify network traffic is leveraging DirectConnect, and then delete your existing VPN connection.
C.    Update your VPC route tables to point to the DirectConnect connection, configure your DirectConnect router with the appropriate settings, verify network traffic is leveraging DirectConnect, and then delete the VPN connection.
D.    Configure your DirectConnect router, update your VPC route tables to point to the DirectConnect connection, configure your VPN connection with a higher BGP priority, and verify network traffic is leveraging the DirectConnect connection.

Answer: C
Explanation:
Direct Connect takes priority over dynamically configured VPN connections, so traffic shifts seamlessly once the route tables point at the DirectConnect connection; the VPN can then be deleted safely.

QUESTION 317
A web company is looking to implement an external payment service into their highly available application deployed in a VPC. Their application EC2 instances are behind a public-facing ELB. Auto Scaling is used to add additional instances as traffic increases; under normal load the application runs 2 instances in the Auto Scaling group, but at peak it can scale to 3x in size. The application instances need to communicate with the payment service over the Internet, which requires whitelisting of all public IP addresses used to communicate with it. A maximum of 4 whitelisted IP addresses are allowed at a time and can be added through an API.
How should they architect their solution?

A.    Route payment requests through two NAT instances set up for high availability and whitelist the Elastic IP addresses attached to the NAT instances.
B.    Whitelist the VPC Internet Gateway public IP and route payment requests through the Internet Gateway.
C.    Whitelist the ELB IP addresses and route payment requests from the application servers through the ELB.
D.    Automatically assign public IP addresses to the application instances in the Auto Scaling group and run a script on boot that adds each instance’s public IP address to the payment validation whitelist API.

Answer: A
Explanation:
B is incorrect, as you do not have visibility into the public IP addresses associated with a VPC Internet Gateway.
C is incorrect, as an ELB receives a public DNS name rather than fixed IP addresses.
D would exceed the maximum of 4 whitelisted IP addresses, since the group can scale to 6 instances at peak.
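
A boto3 sketch of pinning a stable, whitelistable address to one of the NAT instances (the instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Attach a stable Elastic IP to the NAT instance; this is the
# address the payment provider whitelists.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",  # placeholder NAT instance ID
    AllocationId=allocation["AllocationId"],
)
print("Whitelist this address:", allocation["PublicIp"])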

QUESTION 318
You are designing the network infrastructure for an application server in Amazon VPC. Users will access all the application instances from the Internet as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link. How would you design routing to meet the above requirements?

A.    Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
B.    Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
C.    Configure a single routing table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use this routing table across all subnets in your VPC.
D.    Configure two routing tables: one that has a default route via the Internet gateway and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet.

Answer: B

QUESTION 319
You are implementing AWS Direct Connect. You intend to use AWS public service endpoints, such as Amazon S3, across the AWS Direct Connect link. You want other Internet traffic to use your existing link to an Internet Service Provider.
What is the correct way to configure AWS Direct Connect for access to services such as Amazon S3?

A.    Configure a public interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default route to AWS using BGP.
B.    Create a private interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific routes to your network in your VPC.
C.    Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise specific routes for your network to AWS.
D.    Create a private interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.

Answer: C
Explanation:
https://aws.amazon.com/directconnect/faqs/

QUESTION 320
You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28. You initially deploy two web servers, two application servers, two database servers and one NAT instance for a total of seven EC2 instances. The web, application and database servers are deployed across two Availability Zones (AZs). You also deploy an ELB in front of the two web servers, and use Route53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load; unfortunately some of these new instances fail to launch.
Which of the following could be the root cause? (Choose 2 answers)

A.    The Internet Gateway (IGW) of your VPC has scaled up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches.
B.    AWS reserves one IP address in each subnet’s CIDR block for Route53, so you do not have enough addresses left to launch all of the new EC2 instances.
C.    AWS reserves the first and the last private IP address in each subnet’s CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances.
D.    The ELB has scaled up, adding more instances to handle the traffic, reducing the number of available private IP addresses for new instance launches.
E.    AWS reserves the first four and the last IP address in each subnet’s CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances.

Answer: DE
Explanation:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
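
The arithmetic behind answers D and E is easy to check. AWS reserves five addresses in every subnet (network address, VPC router, DNS, future use, broadcast); a sketch treating the /28 as a single subnet for simplicity:

import ipaddress

RESERVED = 5  # network, VPC router, DNS, future use, broadcast

subnet = ipaddress.ip_network("10.0.0.0/28")
usable = subnet.num_addresses - RESERVED
print(subnet.num_addresses, "addresses,", usable, "usable")  # 16, 11

Eleven usable addresses, already holding seven instances plus the ELB's own scaling nodes, leave too few free for doubling every tier, which is why some launches fail.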

QUESTION 321
You’ve been brought in as a solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect has already deployed a 3-tier VPC.
The configuration is as follows:
VPC: vpc-2f8bc447
IGW: igw-2d8bc445
NACL: acl-2080c448
Subnets and Route Tables:
Web servers: subnet-258bc44d
Application servers: subnet-248bc44c
Database servers: subnet-9189c6f9
Route Tables:
rtb-218bc449
rtb-238bc44b
Associations:
subnet-258bc44d: rtb-218bc449
subnet-248bc44c: rtb-238bc44b
subnet-9189c6f9: rtb-238bc44b
You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the Internet; application and database servers cannot have direct access to the Internet. Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?

A.    Create a bastion and NAT instance in subnet-248bc44c and add a route from rtb-238bc44b to subnet-258bc44d.
B.    Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.
C.    Create a bastion and NAT instance in subnet-258bc44d, add a route from rtb-238bc44b to igw-2d8bc445, and add a new NACL that allows access between subnet-258bc44d and subnet-248bc44c.
D.    Create a bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance.

Answer: D
Explanation:
Create the NAT instance in the public subnet, which is the web servers’ subnet (subnet-258bc44d), and add a route in rtb-238bc44b, the route table of the private subnets (subnet-248bc44c and subnet-9189c6f9), pointing to the NAT instance so those servers can retrieve updates.
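
A boto3 sketch of that route (the route table ID follows the question; the NAT instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Default route for the private subnets: Internet-bound traffic goes
# through the NAT instance in the public subnet.
ec2.create_route(
    RouteTableId="rtb-238bc44b",
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId="i-0123456789abcdef0",  # placeholder NAT instance ID
)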

QUESTION 322
You are designing Internet connectivity for your VPC. The Web servers must be available on the Internet. The application must have a highly available architecture. Which alternatives should you consider? (Choose 2 answers)

A.    Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT instance public IP address.
B.    Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your web servers. Configure a Route53 CNAME record to your CloudFront distribution.
C.    Place all your web servers behind an ELB. Configure a Route53 CNAME to point to the ELB DNS name.
D.    Assign EIPs to all web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover.
E.    Configure an ELB with an EIP. Place all your web servers behind the ELB. Configure a Route53 A record that points to the EIP.

Answer: CD

QUESTION 323
You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there’s no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)

A.    An AWS Direct Connect link between the VPC and the network housing the internal services.
B.    An Internet Gateway to allow a VPN connection.
C.    An Elastic IP address on the VPC instance.
D.    An IP address space that does not conflict with the one on-premises.
E.    Entries in Amazon Route 53 that allow the instance to resolve its dependencies’ IP addresses.
F.    A VM Import of the current virtual machine.

Answer: ADF

QUESTION 324
You are migrating a legacy client-server application to AWS. The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket. A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code, but you have to file a change request. How would you implement the architecture on AWS in order to maximize scalability and high availability?

A.    File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP listener and Proxy Protocol enabled to distribute load to two application servers in different AZs.
B.    File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP listener and Cross-Zone Load Balancing enabled, and two application servers in different AZs.
C.    File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load to two application servers in different AZs.
D.    File a change request to implement Alias Resource support in the application. Use a Route 53 Alias Resource Record to distribute load to two application servers in different AZs.

Answer: A
Explanation:
https://aws.amazon.com/blogs/aws/elastic-load-balancing-adds-support-for-proxy-protocol/
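
A boto3 sketch of enabling Proxy Protocol on a Classic ELB backend port (the load balancer name and port are examples):

import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Create a ProxyProtocol policy and bind it to the backend port so the
# application servers can read the original client IP from the
# Proxy Protocol header prepended to each TCP connection.
elb.create_load_balancer_policy(
    LoadBalancerName="legacy-app-elb",
    PolicyName="EnableProxyProtocol",
    PolicyTypeName="ProxyProtocolPolicyType",
    PolicyAttributes=[
        {"AttributeName": "ProxyProtocol", "AttributeValue": "true"},
    ],
)
elb.set_load_balancer_policies_for_backend_server(
    LoadBalancerName="legacy-app-elb",
    InstancePort=8080,
    PolicyNames=["EnableProxyProtocol"],
)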

QUESTION 325
A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx. 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS with a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate?

A.    Use S3 with reduced redundancy to store and serve the scanned files; install the commercial search application on EC2 instances and configure with auto-scaling and an Elastic Load Balancer.
B.    Model the environment using CloudFormation; use an EC2 instance running an Apache web server and an open-source search application; stripe multiple standard EBS volumes together to store the JPEGs and search index.
C.    Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple Availability Zones.
D.    Use a single-AZ RDS MySQL instance to store the search index and the JPEG images; use an EC2 instance to serve the website and translate user queries into SQL.
E.    Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java container for the website, on EC2 instances, and use Route53 with DNS round-robin.

Answer: C
Explanation:
CloudSearch is the natural fit for the search workload, S3 with standard redundancy provides the required durability for the scanned files, and Elastic Beanstalk hosts the Java website across multiple Availability Zones.

Lead2pass is no doubt your best choice. Using the Amazon AWS CERTIFIED SOLUTIONS ARCHITECT – ASSOCIATE exam dumps can improve the efficiency of your studying and save you much more time.

AWS Certified Solutions Architect – Associate new questions on Google Drive: https://drive.google.com/open?id=0B3Syig5i8gpDNlBGazRSTENUQW8

2016 Amazon AWS Certified Solutions Architect – Associate exam dumps (All 423 Q&As) from Lead2pass:

http://www.lead2pass.com/aws-certified-solutions-architect-associate.html [100% Exam Pass Guaranteed]
